From: liuhongt <hongtao.liu@intel.com>
To: gcc-patches@gcc.gnu.org
Cc: crazylht@gmail.com, hjl.tools@gmail.com, ubizjak@gmail.com,
jakub@redhat.com
Subject: [PATCH 51/62] AVX512FP16: Add vfcmaddcsh/vfmaddcsh/vfcmulcsh/vfmulcsh.
Date: Thu, 1 Jul 2021 14:16:37 +0800
Message-ID: <20210701061648.9447-52-hongtao.liu@intel.com>
In-Reply-To: <20210701061648.9447-1-hongtao.liu@intel.com>
gcc/ChangeLog:
* config/i386/avx512fp16intrin.h (_mm_mask_fcmadd_sch):
New intrinsic.
(_mm_mask3_fcmadd_sch): Likewise.
(_mm_maskz_fcmadd_sch): Likewise.
(_mm_fcmadd_sch): Likewise.
(_mm_mask_fmadd_sch): Likewise.
(_mm_mask3_fmadd_sch): Likewise.
(_mm_maskz_fmadd_sch): Likewise.
(_mm_fmadd_sch): Likewise.
(_mm_mask_fcmadd_round_sch): Likewise.
(_mm_mask3_fcmadd_round_sch): Likewise.
(_mm_maskz_fcmadd_round_sch): Likewise.
(_mm_fcmadd_round_sch): Likewise.
(_mm_mask_fmadd_round_sch): Likewise.
(_mm_mask3_fmadd_round_sch): Likewise.
(_mm_maskz_fmadd_round_sch): Likewise.
(_mm_fmadd_round_sch): Likewise.
(_mm_fcmul_sch): Likewise.
(_mm_mask_fcmul_sch): Likewise.
(_mm_maskz_fcmul_sch): Likewise.
(_mm_fmul_sch): Likewise.
(_mm_mask_fmul_sch): Likewise.
(_mm_maskz_fmul_sch): Likewise.
(_mm_fcmul_round_sch): Likewise.
(_mm_mask_fcmul_round_sch): Likewise.
(_mm_maskz_fcmul_round_sch): Likewise.
(_mm_fmul_round_sch): Likewise.
(_mm_mask_fmul_round_sch): Likewise.
(_mm_maskz_fmul_round_sch): Likewise.
* config/i386/i386-builtin.def: Add corresponding new builtins.
* config/i386/sse.md
(avx512fp16_fmaddcsh_v8hf_maskz<round_expand_name>): New expander.
(avx512fp16_fcmaddcsh_v8hf_maskz<round_expand_name>): Ditto.
(avx512fp16_fma_<complexopname>sh_v8hf<mask_scalarcz_name><round_scalarcz_name>):
New define_insn.
(avx512fp16_<complexopname>sh_v8hf_mask<round_name>): Ditto.
(avx512fp16_<complexopname>sh_v8hf<mask_scalarc_name><round_scalarcz_name>):
Ditto.
* config/i386/subst.md (mask_scalarcz_name): New.
(mask_scalarc_name): Ditto.
(mask_scalarc_operand3): Ditto.
(mask_scalarcz_operand4): Ditto.
(round_scalarcz_name): Ditto.
(round_scalarc_mask_operand3): Ditto.
(round_scalarcz_mask_operand4): Ditto.
(round_scalarc_mask_op3): Ditto.
(round_scalarcz_mask_op4): Ditto.
(round_scalarcz_constraint): Ditto.
(round_scalarcz_nimm_predicate): Ditto.
(mask_scalarcz): Ditto.
(mask_scalarc): Ditto.
(round_scalarcz): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx-1.c: Add test for new builtins.
* gcc.target/i386/sse-13.c: Ditto.
* gcc.target/i386/sse-23.c: Ditto.
* gcc.target/i386/sse-14.c: Add test for new intrinsics.
* gcc.target/i386/sse-22.c: Ditto.
---
gcc/config/i386/avx512fp16intrin.h | 464 +++++++++++++++++++++++++
gcc/config/i386/i386-builtin.def | 10 +
gcc/config/i386/sse.md | 76 ++++
gcc/config/i386/subst.md | 63 ++++
gcc/testsuite/gcc.target/i386/avx-1.c | 10 +
gcc/testsuite/gcc.target/i386/sse-13.c | 10 +
gcc/testsuite/gcc.target/i386/sse-14.c | 14 +
gcc/testsuite/gcc.target/i386/sse-22.c | 14 +
gcc/testsuite/gcc.target/i386/sse-23.c | 10 +
9 files changed, 671 insertions(+)
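For reference, here is a minimal usage sketch of the new scalar complex
_Float16 intrinsics.  It is illustrative only and not part of the patch:
it assumes a compiler with this series applied and -mavx512fp16 enabled,
and scalar_complex_demo and its operands are hypothetical.  Elements 0
and 1 of each __m128h operand hold the real and imaginary parts of one
complex value; the upper elements follow the instructions' merge
semantics.

#include <immintrin.h>

__m128h
scalar_complex_demo (__m128h a, __m128h b, __m128h acc, __mmask8 m)
{
  /* Low complex pair: mul = a * b; the "c" forms multiply by the
     complex conjugate of b instead.  */
  __m128h mul  = _mm_fmul_sch (a, b);
  __m128h cmul = _mm_fcmul_sch (a, b);

  /* Fused complex multiply-add on the low pair.  */
  __m128h mad = _mm_fmadd_sch (a, b, acc);

  /* Merge-masked form: the computed pair is written when bit 0 of m
     is set, otherwise the low pair of a passes through.  */
  __m128h mmad = _mm_mask_fmadd_sch (a, m, b, acc);

  /* Embedded-rounding form; the rounding argument must be a
     compile-time constant.  */
  __m128h rmad = _mm_fmadd_round_sch (a, b, acc,
                                      _MM_FROUND_TO_NEAREST_INT
                                      | _MM_FROUND_NO_EXC);

  /* Chain the partial results through the new intrinsics so nothing
     is dead code.  */
  return _mm_fmadd_sch (mul, cmul, _mm_fcmadd_sch (mad, mmad, rmad));
}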
diff --git a/gcc/config/i386/avx512fp16intrin.h b/gcc/config/i386/avx512fp16intrin.h
index 9dd71019972..39c10beb1de 100644
--- a/gcc/config/i386/avx512fp16intrin.h
+++ b/gcc/config/i386/avx512fp16intrin.h
@@ -6495,6 +6495,470 @@ _mm512_maskz_fmul_round_pch (__mmask16 __A, __m512h __B,
#endif /* __OPTIMIZE__ */
+/* Intrinsics vf[,c]maddcsh. */
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fcmadd_sch (__m128h __A, __mmask8 __B, __m128h __C, __m128h __D)
+{
+#ifdef __AVX512VL__
+ return (__m128h) __builtin_ia32_movaps128_mask (
+ (__v4sf)
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C, __B,
+ _MM_FROUND_CUR_DIRECTION),
+ (__v4sf) __A, __B);
+#else
+ return (__m128h) __builtin_ia32_blendvps ((__v4sf) __A,
+ (__v4sf)
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C, __B,
+ _MM_FROUND_CUR_DIRECTION),
+ (__v4sf) _mm_set_ss ((float) ((int) __B << 31)));
+#endif
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask3_fcmadd_sch (__m128h __A, __m128h __B, __m128h __C, __mmask8 __D)
+{
+ return (__m128h) _mm_move_ss ((__m128) __C,
+ (__m128)
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B, __D,
+ _MM_FROUND_CUR_DIRECTION));
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fcmadd_sch (__mmask8 __A, __m128h __B, __m128h __C, __m128h __D)
+{
+ return (__m128h)
+ __builtin_ia32_vfcmaddcsh_v8hf_maskz_round((__v8hf) __D,
+ (__v8hf) __B,
+ (__v8hf) __C,
+ __A, _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fcmadd_sch (__m128h __A, __m128h __B, __m128h __C)
+{
+ return (__m128h)
+ __builtin_ia32_vfcmaddcsh_v8hf_round((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B,
+ _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fmadd_sch (__m128h __A, __mmask8 __B, __m128h __C, __m128h __D)
+{
+#ifdef __AVX512VL__
+ return (__m128h) __builtin_ia32_movaps128_mask (
+ (__v4sf)
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C, __B,
+ _MM_FROUND_CUR_DIRECTION),
+ (__v4sf) __A, __B);
+#else
+ return (__m128h) __builtin_ia32_blendvps ((__v4sf) __A,
+ (__v4sf)
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C, __B,
+ _MM_FROUND_CUR_DIRECTION),
+ (__v4sf) _mm_set_ss ((float) ((int) __B << 31)));
+#endif
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask3_fmadd_sch (__m128h __A, __m128h __B, __m128h __C, __mmask8 __D)
+{
+ return (__m128h) _mm_move_ss ((__m128) __C,
+ (__m128)
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B, __D,
+ _MM_FROUND_CUR_DIRECTION));
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fmadd_sch (__mmask8 __A, __m128h __B, __m128h __C, __m128h __D)
+{
+ return (__m128h)
+ __builtin_ia32_vfmaddcsh_v8hf_maskz_round((__v8hf) __D,
+ (__v8hf) __B,
+ (__v8hf) __C,
+ __A, _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fmadd_sch (__m128h __A, __m128h __B, __m128h __C)
+{
+ return (__m128h)
+ __builtin_ia32_vfmaddcsh_v8hf_round((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B,
+ _MM_FROUND_CUR_DIRECTION);
+}
+
+#ifdef __OPTIMIZE__
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fcmadd_round_sch (__m128h __A, __mmask8 __B, __m128h __C,
+ __m128h __D, const int __E)
+{
+#ifdef __AVX512VL__
+ return (__m128h) __builtin_ia32_movaps128_mask (
+ (__v4sf)
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C,
+ __B, __E),
+ (__v4sf) __A, __B);
+#else
+ return (__m128h) __builtin_ia32_blendvps ((__v4sf) __A,
+ (__v4sf)
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C,
+ __B, __E),
+ (__v4sf) _mm_set_ss ((float) ((int) __B << 31)));
+#endif
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask3_fcmadd_round_sch (__m128h __A, __m128h __B, __m128h __C,
+ __mmask8 __D, const int __E)
+{
+ return (__m128h) _mm_move_ss ((__m128) __C,
+ (__m128)
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B,
+ __D, __E));
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fcmadd_round_sch (__mmask8 __A, __m128h __B, __m128h __C,
+ __m128h __D, const int __E)
+{
+ return (__m128h)__builtin_ia32_vfcmaddcsh_v8hf_maskz_round((__v8hf) __D,
+ (__v8hf) __B,
+ (__v8hf) __C,
+ __A, __E);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fcmadd_round_sch (__m128h __A, __m128h __B, __m128h __C, const int __D)
+{
+ return (__m128h)__builtin_ia32_vfcmaddcsh_v8hf_round((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B,
+ __D);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fmadd_round_sch (__m128h __A, __mmask8 __B, __m128h __C,
+ __m128h __D, const int __E)
+{
+#ifdef __AVX512VL__
+ return (__m128h) __builtin_ia32_movaps128_mask (
+ (__v4sf)
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C,
+ __B, __E),
+ (__v4sf) __A, __B);
+#else
+ return (__m128h) __builtin_ia32_blendvps ((__v4sf) __A,
+ (__v4sf)
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) __D,
+ (__v8hf) __A,
+ (__v8hf) __C,
+ __B, __E),
+ (__v4sf) _mm_set_ss ((float) ((int) __B << 31)));
+#endif
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask3_fmadd_round_sch (__m128h __A, __m128h __B, __m128h __C,
+ __mmask8 __D, const int __E)
+{
+ return (__m128h) _mm_move_ss ((__m128) __C,
+ (__m128)
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B,
+ __D, __E));
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fmadd_round_sch (__mmask8 __A, __m128h __B, __m128h __C,
+ __m128h __D, const int __E)
+{
+ return (__m128h)__builtin_ia32_vfmaddcsh_v8hf_maskz_round((__v8hf) __D,
+ (__v8hf) __B,
+ (__v8hf) __C,
+ __A, __E);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fmadd_round_sch (__m128h __A, __m128h __B, __m128h __C, const int __D)
+{
+ return (__m128h)__builtin_ia32_vfmaddcsh_v8hf_round((__v8hf) __C,
+ (__v8hf) __A,
+ (__v8hf) __B,
+ __D);
+}
+
+#else
+#ifdef __AVX512VL__
+#define _mm_mask_fcmadd_round_sch(A, B, C, D, E) \
+ ((__m128h) __builtin_ia32_movaps128_mask ( \
+ (__v4sf) \
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) (D), \
+ (__v8hf) (A), \
+ (__v8hf) (C), \
+ (B), (E)), \
+ (__v4sf) (A), (B)))
+
+#else
+#define _mm_mask_fcmadd_round_sch(A, B, C, D, E) \
+ ((__m128h) __builtin_ia32_blendvps ((__v4sf) (A), \
+ (__v4sf) \
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) (D), \
+ (__v8hf) (A), \
+ (__v8hf) (C), \
+ (B), (E)), \
+ (__v4sf) _mm_set_ss ((float) ((int) (B) << 31))))
+#endif
+
+#define _mm_mask3_fcmadd_round_sch(A, B, C, D, E) \
+ ((__m128h) _mm_move_ss ((__m128) (C), \
+ (__m128) \
+ __builtin_ia32_vfcmaddcsh_v8hf_mask_round ((__v8hf) (C), \
+ (__v8hf) (A), \
+ (__v8hf) (B), \
+ (D), (E))))
+
+#define _mm_maskz_fcmadd_round_sch(A, B, C, D, E) \
+ __builtin_ia32_vfcmaddcsh_v8hf_maskz_round ((D), (B), (C), (A), (E))
+
+#define _mm_fcmadd_round_sch(A, B, C, D) \
+ __builtin_ia32_vfcmaddcsh_v8hf_round ((C), (A), (B), (D))
+
+#ifdef __AVX512VL__
+#define _mm_mask_fmadd_round_sch(A, B, C, D, E) \
+ ((__m128h) __builtin_ia32_movaps128_mask ( \
+ (__v4sf) \
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) (D), \
+ (__v8hf) (A), \
+ (__v8hf) (C), \
+ (B), (E)), \
+ (__v4sf) (A), (B)))
+
+#else
+#define _mm_mask_fmadd_round_sch(A, B, C, D, E) \
+ ((__m128h) __builtin_ia32_blendvps ((__v4sf) (A), \
+ (__v4sf) \
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) (D), \
+ (__v8hf) (A), \
+ (__v8hf) (C), \
+ (B), (E)), \
+ (__v4sf) _mm_set_ss ((float) ((int) (B) << 31))))
+#endif
+
+#define _mm_mask3_fmadd_round_sch(A, B, C, D, E) \
+ ((__m128h) _mm_move_ss ((__m128) (C), \
+ (__m128) \
+ __builtin_ia32_vfmaddcsh_v8hf_mask_round ((__v8hf) (C), \
+ (__v8hf) (A), \
+ (__v8hf) (B), \
+ (D), (E))))
+
+#define _mm_maskz_fmadd_round_sch(A, B, C, D, E) \
+ __builtin_ia32_vfmaddcsh_v8hf_maskz_round ((D), (B), (C), (A), (E))
+
+#define _mm_fmadd_round_sch(A, B, C, D) \
+ __builtin_ia32_vfmaddcsh_v8hf_round ((C), (A), (B), (D))
+
+#endif /* __OPTIMIZE__ */
+
+/* Intrinsics vf[,c]mulcsh. */
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fcmul_sch (__m128h __A, __m128h __B)
+{
+ return (__m128h)
+ __builtin_ia32_vfcmulcsh_v8hf_round((__v8hf) __A,
+ (__v8hf) __B,
+ _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fcmul_sch (__m128h __A, __mmask8 __B, __m128h __C, __m128h __D)
+{
+ return (__m128h)
+ __builtin_ia32_vfcmulcsh_v8hf_mask_round((__v8hf) __C,
+ (__v8hf) __D,
+ (__v8hf) __A,
+ __B, _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fcmul_sch (__mmask8 __A, __m128h __B, __m128h __C)
+{
+ return (__m128h)
+ __builtin_ia32_vfcmulcsh_v8hf_mask_round((__v8hf) __B,
+ (__v8hf) __C,
+ _mm_setzero_ph (),
+ __A, _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fmul_sch (__m128h __A, __m128h __B)
+{
+ return (__m128h)
+ __builtin_ia32_vfmulcsh_v8hf_round((__v8hf) __A,
+ (__v8hf) __B,
+ _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fmul_sch (__m128h __A, __mmask8 __B, __m128h __C, __m128h __D)
+{
+ return (__m128h)
+ __builtin_ia32_vfmulcsh_v8hf_mask_round((__v8hf) __C,
+ (__v8hf) __D,
+ (__v8hf) __A,
+ __B, _MM_FROUND_CUR_DIRECTION);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fmul_sch (__mmask8 __A, __m128h __B, __m128h __C)
+{
+ return (__m128h)
+ __builtin_ia32_vfmulcsh_v8hf_mask_round((__v8hf) __B,
+ (__v8hf) __C,
+ _mm_setzero_ph (),
+ __A, _MM_FROUND_CUR_DIRECTION);
+}
+
+#ifdef __OPTIMIZE__
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fcmul_round_sch (__m128h __A, __m128h __B, const int __D)
+{
+ return (__m128h)__builtin_ia32_vfcmulcsh_v8hf_round((__v8hf) __A,
+ (__v8hf) __B,
+ __D);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fcmul_round_sch (__m128h __A, __mmask8 __B, __m128h __C,
+ __m128h __D, const int __E)
+{
+ return (__m128h)__builtin_ia32_vfcmulcsh_v8hf_mask_round((__v8hf) __C,
+ (__v8hf) __D,
+ (__v8hf) __A,
+ __B, __E);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fcmul_round_sch (__mmask8 __A, __m128h __B, __m128h __C,
+ const int __E)
+{
+ return (__m128h)__builtin_ia32_vfcmulcsh_v8hf_mask_round((__v8hf) __B,
+ (__v8hf) __C,
+ _mm_setzero_ph (),
+ __A, __E);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_fmul_round_sch (__m128h __A, __m128h __B, const int __D)
+{
+ return (__m128h)__builtin_ia32_vfmulcsh_v8hf_round((__v8hf) __A,
+ (__v8hf) __B, __D);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_mask_fmul_round_sch (__m128h __A, __mmask8 __B, __m128h __C,
+ __m128h __D, const int __E)
+{
+ return (__m128h)__builtin_ia32_vfmulcsh_v8hf_mask_round((__v8hf) __C,
+ (__v8hf) __D,
+ (__v8hf) __A,
+ __B, __E);
+}
+
+extern __inline __m128h
+__attribute__ ((__gnu_inline__, __always_inline__, __artificial__))
+_mm_maskz_fmul_round_sch (__mmask8 __A, __m128h __B, __m128h __C, const int __E)
+{
+ return (__m128h)__builtin_ia32_vfmulcsh_v8hf_mask_round((__v8hf) __B,
+ (__v8hf) __C,
+ _mm_setzero_ph (),
+ __A, __E);
+}
+
+#else
+#define _mm_fcmul_round_sch(__A, __B, __D) \
+ (__m128h)__builtin_ia32_vfcmulcsh_v8hf_round((__v8hf) __A,(__v8hf) __B, __D)
+
+#define _mm_mask_fcmul_round_sch(__A, __B, __C, __D, __E) \
+ (__m128h)__builtin_ia32_vfcmulcsh_v8hf_mask_round((__v8hf) __C, \
+ (__v8hf) __D, \
+ (__v8hf) __A, \
+ __B, __E)
+
+#define _mm_maskz_fcmul_round_sch(__A, __B, __C, __E) \
+ (__m128h)__builtin_ia32_vfcmulcsh_v8hf_mask_round((__v8hf) __B, \
+ (__v8hf) __C, \
+ _mm_setzero_ph(), \
+ __A, __E)
+
+#define _mm_fmul_round_sch(__A, __B, __D) \
+ (__m128h)__builtin_ia32_vfmulcsh_v8hf_round((__v8hf) __A,(__v8hf) __B, __D)
+
+#define _mm_mask_fmul_round_sch(__A, __B, __C, __D, __E) \
+ (__m128h)__builtin_ia32_vfmulcsh_v8hf_mask_round((__v8hf) __C, \
+ (__v8hf) __D, \
+ (__v8hf) __A, \
+ __B, __E)
+
+#define _mm_maskz_fmul_round_sch(__A, __B, __C, __E) \
+ (__m128h)__builtin_ia32_vfmulcsh_v8hf_mask_round((__v8hf) __B, \
+ (__v8hf) __C, \
+ _mm_setzero_ph (), \
+ __A, __E)
+
+#endif /* __OPTIMIZE__ */
+
#ifdef __DISABLE_AVX512FP16__
#undef __DISABLE_AVX512FP16__
#pragma GCC pop_options
diff --git a/gcc/config/i386/i386-builtin.def b/gcc/config/i386/i386-builtin.def
index 448f9f75fa4..8d57413153f 100644
--- a/gcc/config/i386/i386-builtin.def
+++ b/gcc/config/i386/i386-builtin.def
@@ -3231,6 +3231,16 @@ BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512bw_fcmulc_v32hf_round, "__
BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512bw_fcmulc_v32hf_mask_round, "__builtin_ia32_vfcmulcph_v32hf_mask_round", IX86_BUILTIN_VFCMULCPH_V32HF_MASK_ROUND, UNKNOWN, (int) V32HF_FTYPE_V32HF_V32HF_V32HF_UHI_INT)
BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512bw_fmulc_v32hf_round, "__builtin_ia32_vfmulcph_v32hf_round", IX86_BUILTIN_VFMULCPH_V32HF_ROUND, UNKNOWN, (int) V32HF_FTYPE_V32HF_V32HF_INT)
BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512bw_fmulc_v32hf_mask_round, "__builtin_ia32_vfmulcph_v32hf_mask_round", IX86_BUILTIN_VFMULCPH_V32HF_MASK_ROUND, UNKNOWN, (int) V32HF_FTYPE_V32HF_V32HF_V32HF_UHI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fma_fcmaddcsh_v8hf_round, "__builtin_ia32_vfcmaddcsh_v8hf_round", IX86_BUILTIN_VFCMADDCSH_V8HF_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fcmaddcsh_v8hf_mask_round, "__builtin_ia32_vfcmaddcsh_v8hf_mask_round", IX86_BUILTIN_VFCMADDCSH_V8HF_MASK_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fcmaddcsh_v8hf_maskz_round, "__builtin_ia32_vfcmaddcsh_v8hf_maskz_round", IX86_BUILTIN_VFCMADDCSH_V8HF_MASKZ_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fma_fmaddcsh_v8hf_round, "__builtin_ia32_vfmaddcsh_v8hf_round", IX86_BUILTIN_VFMADDCSH_V8HF_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fmaddcsh_v8hf_mask_round, "__builtin_ia32_vfmaddcsh_v8hf_mask_round", IX86_BUILTIN_VFMADDCSH_V8HF_MASK_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fmaddcsh_v8hf_maskz_round, "__builtin_ia32_vfmaddcsh_v8hf_maskz_round", IX86_BUILTIN_VFMADDCSH_V8HF_MASKZ_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fcmulcsh_v8hf_round, "__builtin_ia32_vfcmulcsh_v8hf_round", IX86_BUILTIN_VFCMULCSH_V8HF_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fcmulcsh_v8hf_mask_round, "__builtin_ia32_vfcmulcsh_v8hf_mask_round", IX86_BUILTIN_VFCMULCSH_V8HF_MASK_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_UQI_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fmulcsh_v8hf_round, "__builtin_ia32_vfmulcsh_v8hf_round", IX86_BUILTIN_VFMULCSH_V8HF_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_INT)
+BDESC (0, OPTION_MASK_ISA2_AVX512FP16, CODE_FOR_avx512fp16_fmulcsh_v8hf_mask_round, "__builtin_ia32_vfmulcsh_v8hf_mask_round", IX86_BUILTIN_VFMULCSH_V8HF_MASK_ROUND, UNKNOWN, (int) V8HF_FTYPE_V8HF_V8HF_V8HF_UQI_INT)
BDESC_END (ROUND_ARGS, MULTI_ARG)
diff --git a/gcc/config/i386/sse.md b/gcc/config/i386/sse.md
index ddd93f739e3..2c3dba5bdb0 100644
--- a/gcc/config/i386/sse.md
+++ b/gcc/config/i386/sse.md
@@ -5597,6 +5597,82 @@ (define_insn "<avx512>_<complexopname>_<mode><maskc_name><round_name>"
[(set_attr "type" "ssemul")
(set_attr "mode" "<MODE>")])
+(define_expand "avx512fp16_fmaddcsh_v8hf_maskz<round_expand_name>"
+ [(match_operand:V8HF 0 "register_operand")
+ (match_operand:V8HF 1 "<round_expand_nimm_predicate>")
+ (match_operand:V8HF 2 "<round_expand_nimm_predicate>")
+ (match_operand:V8HF 3 "<round_expand_nimm_predicate>")
+ (match_operand:QI 4 "register_operand")]
+ "TARGET_AVX512FP16 && <round_mode512bit_condition>"
+{
+ emit_insn (gen_avx512fp16_fma_fmaddcsh_v8hf_maskz<round_expand_name> (
+ operands[0], operands[1], operands[2], operands[3],
+ CONST0_RTX (V8HFmode), operands[4]<round_expand_operand>));
+ DONE;
+})
+
+(define_expand "avx512fp16_fcmaddcsh_v8hf_maskz<round_expand_name>"
+ [(match_operand:V8HF 0 "register_operand")
+ (match_operand:V8HF 1 "<round_expand_nimm_predicate>")
+ (match_operand:V8HF 2 "<round_expand_nimm_predicate>")
+ (match_operand:V8HF 3 "<round_expand_nimm_predicate>")
+ (match_operand:QI 4 "register_operand")]
+ "TARGET_AVX512FP16 && <round_mode512bit_condition>"
+{
+ emit_insn (gen_avx512fp16_fma_fcmaddcsh_v8hf_maskz<round_expand_name> (
+ operands[0], operands[1], operands[2], operands[3],
+ CONST0_RTX (V8HFmode), operands[4]<round_expand_operand>));
+ DONE;
+})
+
+(define_insn "avx512fp16_fma_<complexopname>sh_v8hf<mask_scalarcz_name><round_scalarcz_name>"
+ [(set (match_operand:V8HF 0 "register_operand" "=v")
+ (vec_merge:V8HF
+ (unspec:V8HF
+ [(match_operand:V8HF 1 "<round_scalarcz_nimm_predicate>" "0")
+ (match_operand:V8HF 2 "<round_scalarcz_nimm_predicate>" "v")
+ (match_operand:V8HF 3 "<round_scalarcz_nimm_predicate>" "<round_scalarcz_constraint>")]
+ UNSPEC_COMPLEX_F_C_MA)
+ (match_dup 2)
+ (const_int 3)))]
+ "TARGET_AVX512FP16"
+ "v<complexopname>sh\t{<round_scalarcz_mask_op4>%3, %2, %0<mask_scalarcz_operand4>|%0<mask_scalarcz_operand4>, %2, %3<round_scalarcz_maskcz_mask_op4>}"
+ [(set_attr "type" "ssemuladd")
+ (set_attr "mode" "V8HF")])
+
+(define_insn "avx512fp16_<complexopname>sh_v8hf_mask<round_name>"
+ [(set (match_operand:V8HF 0 "register_operand" "=v")
+ (vec_merge:V8HF
+ (vec_merge:V8HF
+ (unspec:V8HF
+ [(match_operand:V8HF 1 "<round_nimm_predicate>" "0")
+ (match_operand:V8HF 2 "<round_nimm_predicate>" "v")
+ (match_operand:V8HF 3 "<round_nimm_predicate>" "<round_constraint>")]
+ UNSPEC_COMPLEX_F_C_MA)
+ (match_dup 1)
+ (unspec:QI [(match_operand:QI 4 "register_operand" "Yk")]
+ UNSPEC_COMPLEX_MASK))
+ (match_dup 2)
+ (const_int 3)))]
+ "TARGET_AVX512FP16"
+ "v<complexopname>sh\t{<round_op5>%3, %2, %0%{%4%}|%0%{%4%}, %2, %3<round_op5>}"
+ [(set_attr "type" "ssemuladd")
+ (set_attr "mode" "V8HF")])
+
+(define_insn "avx512fp16_<complexopname>sh_v8hf<mask_scalarc_name><round_scalarcz_name>"
+ [(set (match_operand:V8HF 0 "register_operand" "=v")
+ (vec_merge:V8HF
+ (unspec:V8HF
+ [(match_operand:V8HF 1 "nonimmediate_operand" "v")
+ (match_operand:V8HF 2 "<round_scalarcz_nimm_predicate>" "<round_scalarcz_constraint>")]
+ UNSPEC_COMPLEX_F_C_MUL)
+ (match_dup 1)
+ (const_int 3)))]
+ "TARGET_AVX512FP16"
+ "v<complexopname>sh\t{<round_scalarc_mask_op3>%2, %1, %0<mask_scalarc_operand3>|%0<mask_scalarc_operand3>, %1, %2<round_scalarc_mask_op3>}"
+ [(set_attr "type" "ssemul")
+ (set_attr "mode" "V8HF")])
+
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;
;; Parallel half-precision floating point conversion operations
diff --git a/gcc/config/i386/subst.md b/gcc/config/i386/subst.md
index 3a1f554e9b9..5b14a632111 100644
--- a/gcc/config/i386/subst.md
+++ b/gcc/config/i386/subst.md
@@ -308,8 +308,12 @@ (define_subst "mask_expand4"
(match_operand:<avx512fmaskmode> 5 "register_operand")])
(define_subst_attr "mask_scalar_name" "mask_scalar" "" "_mask")
+(define_subst_attr "mask_scalarcz_name" "mask_scalarcz" "" "_maskz")
+(define_subst_attr "mask_scalarc_name" "mask_scalarc" "" "_mask")
+(define_subst_attr "mask_scalarc_operand3" "mask_scalarc" "" "%{%4%}%N3")
(define_subst_attr "mask_scalar_operand3" "mask_scalar" "" "%{%4%}%N3")
(define_subst_attr "mask_scalar_operand4" "mask_scalar" "" "%{%5%}%N4")
+(define_subst_attr "mask_scalarcz_operand4" "mask_scalarcz" "" "%{%5%}%N4")
(define_subst "mask_scalar"
[(set (match_operand:SUBST_V 0)
@@ -327,12 +331,55 @@ (define_subst "mask_scalar"
(match_dup 2)
(const_int 1)))])
+(define_subst "mask_scalarcz"
+ [(set (match_operand:SUBST_CV 0)
+ (vec_merge:SUBST_CV
+ (match_operand:SUBST_CV 1)
+ (match_operand:SUBST_CV 2)
+ (const_int 3)))]
+ "TARGET_AVX512F"
+ [(set (match_dup 0)
+ (vec_merge:SUBST_CV
+ (vec_merge:SUBST_CV
+ (match_dup 1)
+ (match_operand:SUBST_CV 3 "const0_operand" "C")
+ (unspec:<avx512fmaskmode>
+ [(match_operand:<avx512fmaskcmode> 4 "register_operand" "Yk")]
+ UNSPEC_COMPLEX_MASK))
+ (match_dup 2)
+ (const_int 3)))])
+
+(define_subst "mask_scalarc"
+ [(set (match_operand:SUBST_CV 0)
+ (vec_merge:SUBST_CV
+ (match_operand:SUBST_CV 1)
+ (match_operand:SUBST_CV 2)
+ (const_int 3)))]
+ "TARGET_AVX512F"
+ [(set (match_dup 0)
+ (vec_merge:SUBST_CV
+ (vec_merge:SUBST_CV
+ (match_dup 1)
+ (match_operand:SUBST_CV 3 "nonimm_or_0_operand" "0C")
+ (unspec:<avx512fmaskmode>
+ [(match_operand:<avx512fmaskcmode> 4 "register_operand" "Yk")]
+ UNSPEC_COMPLEX_MASK))
+ (match_dup 2)
+ (const_int 3)))])
+
(define_subst_attr "round_scalar_name" "round_scalar" "" "_round")
+(define_subst_attr "round_scalarcz_name" "round_scalarcz" "" "_round")
(define_subst_attr "round_scalar_mask_operand3" "mask_scalar" "%R3" "%R5")
+(define_subst_attr "round_scalarc_mask_operand3" "mask_scalarc" "%R3" "%R5")
+(define_subst_attr "round_scalarcz_mask_operand4" "mask_scalarcz" "%R4" "%R6")
(define_subst_attr "round_scalar_mask_op3" "round_scalar" "" "<round_scalar_mask_operand3>")
+(define_subst_attr "round_scalarc_mask_op3" "round_scalarcz" "" "<round_scalarc_mask_operand3>")
+(define_subst_attr "round_scalarcz_mask_op4" "round_scalarcz" "" "<round_scalarcz_mask_operand4>")
(define_subst_attr "round_scalar_constraint" "round_scalar" "vm" "v")
+(define_subst_attr "round_scalarcz_constraint" "round_scalarcz" "vm" "v")
(define_subst_attr "round_scalar_prefix" "round_scalar" "vex" "evex")
(define_subst_attr "round_scalar_nimm_predicate" "round_scalar" "nonimmediate_operand" "register_operand")
+(define_subst_attr "round_scalarcz_nimm_predicate" "round_scalarcz" "vector_operand" "register_operand")
(define_subst "round_scalar"
[(set (match_operand:SUBST_V 0)
@@ -350,6 +397,22 @@ (define_subst "round_scalar"
(match_operand:SI 3 "const_4_or_8_to_11_operand")]
UNSPEC_EMBEDDED_ROUNDING))])
+(define_subst "round_scalarcz"
+ [(set (match_operand:SUBST_V 0)
+ (vec_merge:SUBST_V
+ (match_operand:SUBST_V 1)
+ (match_operand:SUBST_V 2)
+ (const_int 3)))]
+ "TARGET_AVX512F"
+ [(set (match_dup 0)
+ (unspec:SUBST_V [
+ (vec_merge:SUBST_V
+ (match_dup 1)
+ (match_dup 2)
+ (const_int 3))
+ (match_operand:SI 3 "const_4_or_8_to_11_operand")]
+ UNSPEC_EMBEDDED_ROUNDING))])
+
(define_subst_attr "round_saeonly_scalar_name" "round_saeonly_scalar" "" "_round")
(define_subst_attr "round_saeonly_scalar_mask_operand3" "mask_scalar" "%r3" "%r5")
(define_subst_attr "round_saeonly_scalar_mask_operand4" "mask_scalar" "%r4" "%r6")
diff --git a/gcc/testsuite/gcc.target/i386/avx-1.c b/gcc/testsuite/gcc.target/i386/avx-1.c
index 56e90d9f9a5..69de37a0087 100644
--- a/gcc/testsuite/gcc.target/i386/avx-1.c
+++ b/gcc/testsuite/gcc.target/i386/avx-1.c
@@ -797,6 +797,16 @@
#define __builtin_ia32_vfmulcph_v32hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmulcph_v32hf_mask_round(A, C, D, B, 8)
#define __builtin_ia32_vfcmulcph_v32hf_round(A, B, C) __builtin_ia32_vfcmulcph_v32hf_round(A, B, 8)
#define __builtin_ia32_vfcmulcph_v32hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmulcph_v32hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_round(A, B, C, D) __builtin_ia32_vfmaddcsh_v8hf_round(A, B, C, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmaddcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_maskz_round(B, C, D, A, E) __builtin_ia32_vfmaddcsh_v8hf_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_round(A, B, C, D) __builtin_ia32_vfcmaddcsh_v8hf_round(A, B, C, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmaddcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_maskz_round(B, C, D, A, E) __builtin_ia32_vfcmaddcsh_v8hf_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfmulcsh_v8hf_round(A, B, C) __builtin_ia32_vfmulcsh_v8hf_round(A, B, 8)
+#define __builtin_ia32_vfmulcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmulcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmulcsh_v8hf_round(A, B, C) __builtin_ia32_vfcmulcsh_v8hf_round(A, B, 8)
+#define __builtin_ia32_vfcmulcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmulcsh_v8hf_mask_round(A, C, D, B, 8)
/* avx512fp16vlintrin.h */
#define __builtin_ia32_vcmpph_v8hf_mask(A, B, C, D) __builtin_ia32_vcmpph_v8hf_mask(A, B, 1, D)
diff --git a/gcc/testsuite/gcc.target/i386/sse-13.c b/gcc/testsuite/gcc.target/i386/sse-13.c
index ef9f8aad853..60adfcc1c67 100644
--- a/gcc/testsuite/gcc.target/i386/sse-13.c
+++ b/gcc/testsuite/gcc.target/i386/sse-13.c
@@ -814,6 +814,16 @@
#define __builtin_ia32_vfmulcph_v32hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmulcph_v32hf_mask_round(A, C, D, B, 8)
#define __builtin_ia32_vfcmulcph_v32hf_round(A, B, C) __builtin_ia32_vfcmulcph_v32hf_round(A, B, 8)
#define __builtin_ia32_vfcmulcph_v32hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmulcph_v32hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_round(A, B, C, D) __builtin_ia32_vfmaddcsh_v8hf_round(A, B, C, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmaddcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_maskz_round(B, C, D, A, E) __builtin_ia32_vfmaddcsh_v8hf_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_round(A, B, C, D) __builtin_ia32_vfcmaddcsh_v8hf_round(A, B, C, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmaddcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_maskz_round(B, C, D, A, E) __builtin_ia32_vfcmaddcsh_v8hf_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfmulcsh_v8hf_round(A, B, C) __builtin_ia32_vfmulcsh_v8hf_round(A, B, 8)
+#define __builtin_ia32_vfmulcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmulcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmulcsh_v8hf_round(A, B, C) __builtin_ia32_vfcmulcsh_v8hf_round(A, B, 8)
+#define __builtin_ia32_vfcmulcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmulcsh_v8hf_mask_round(A, C, D, B, 8)
/* avx512fp16vlintrin.h */
#define __builtin_ia32_vcmpph_v8hf_mask(A, B, C, D) __builtin_ia32_vcmpph_v8hf_mask(A, B, 1, D)
diff --git a/gcc/testsuite/gcc.target/i386/sse-14.c b/gcc/testsuite/gcc.target/i386/sse-14.c
index f27c73fd4cc..956a9d16f84 100644
--- a/gcc/testsuite/gcc.target/i386/sse-14.c
+++ b/gcc/testsuite/gcc.target/i386/sse-14.c
@@ -774,6 +774,8 @@ test_2 (_mm_cvt_roundi32_sh, __m128h, __m128h, int, 8)
test_2 (_mm_cvt_roundu32_sh, __m128h, __m128h, unsigned, 8)
test_2 (_mm512_fmul_round_pch, __m512h, __m512h, __m512h, 8)
test_2 (_mm512_fcmul_round_pch, __m512h, __m512h, __m512h, 8)
+test_2 (_mm_fmul_round_sch, __m128h, __m128h, __m128h, 8)
+test_2 (_mm_fcmul_round_sch, __m128h, __m128h, __m128h, 8)
test_2x (_mm512_cmp_round_ph_mask, __mmask32, __m512h, __m512h, 1, 8)
test_2x (_mm_cmp_round_sh_mask, __mmask8, __m128h, __m128h, 1, 8)
test_2x (_mm_comi_round_sh, int, __m128h, __m128h, 1, 8)
@@ -850,8 +852,12 @@ test_3 (_mm_fmsub_round_sh, __m128h, __m128h, __m128h, __m128h, 9)
test_3 (_mm_fnmsub_round_sh, __m128h, __m128h, __m128h, __m128h, 9)
test_3 (_mm512_fmadd_round_pch, __m512h, __m512h, __m512h, __m512h, 8)
test_3 (_mm512_fcmadd_round_pch, __m512h, __m512h, __m512h, __m512h, 8)
+test_3 (_mm_fmadd_round_sch, __m128h, __m128h, __m128h, __m128h, 8)
+test_3 (_mm_fcmadd_round_sch, __m128h, __m128h, __m128h, __m128h, 8)
test_3 (_mm512_maskz_fmul_round_pch, __m512h, __mmask16, __m512h, __m512h, 8)
test_3 (_mm512_maskz_fcmul_round_pch, __m512h, __mmask16, __m512h, __m512h, 8)
+test_3 (_mm_maskz_fmul_round_sch, __m128h, __mmask8, __m128h, __m128h, 8)
+test_3 (_mm_maskz_fcmul_round_sch, __m128h, __mmask8, __m128h, __m128h, 8)
test_3x (_mm512_mask_cmp_round_ph_mask, __mmask32, __mmask32, __m512h, __m512h, 1, 8)
test_3x (_mm_mask_cmp_round_sh_mask, __mmask8, __mmask8, __m128h, __m128h, 1, 8)
test_3x (_mm512_mask_reduce_round_ph, __m512h, __m512h, __mmask32, __m512h, 123, 8)
@@ -920,8 +926,16 @@ test_4 (_mm512_mask3_fmadd_round_pch, __m512h, __m512h, __m512h, __m512h, __mmas
test_4 (_mm512_mask3_fcmadd_round_pch, __m512h, __m512h, __m512h, __m512h, __mmask16, 8)
test_4 (_mm512_maskz_fmadd_round_pch, __m512h, __mmask16, __m512h, __m512h, __m512h, 8)
test_4 (_mm512_maskz_fcmadd_round_pch, __m512h, __mmask16, __m512h, __m512h, __m512h, 8)
+test_4 (_mm_mask_fmadd_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
+test_4 (_mm_mask_fcmadd_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
+test_4 (_mm_mask3_fmadd_round_sch, __m128h, __m128h, __m128h, __m128h, __mmask8, 8)
+test_4 (_mm_mask3_fcmadd_round_sch, __m128h, __m128h, __m128h, __m128h, __mmask8, 8)
+test_4 (_mm_maskz_fmadd_round_sch, __m128h, __mmask8, __m128h, __m128h, __m128h, 8)
+test_4 (_mm_maskz_fcmadd_round_sch, __m128h, __mmask8, __m128h, __m128h, __m128h, 8)
test_4 (_mm512_mask_fmul_round_pch, __m512h, __m512h, __mmask16, __m512h, __m512h, 8)
test_4 (_mm512_mask_fcmul_round_pch, __m512h, __m512h, __mmask16, __m512h, __m512h, 8)
+test_4 (_mm_mask_fmul_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
+test_4 (_mm_mask_fcmul_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
test_4x (_mm_mask_reduce_round_sh, __m128h, __m128h, __mmask8, __m128h, __m128h, 123, 8)
test_4x (_mm_mask_roundscale_round_sh, __m128h, __m128h, __mmask8, __m128h, __m128h, 123, 8)
test_4x (_mm_mask_getmant_sh, __m128h, __m128h, __mmask8, __m128h, __m128h, 1, 1)
diff --git a/gcc/testsuite/gcc.target/i386/sse-22.c b/gcc/testsuite/gcc.target/i386/sse-22.c
index ccf8c3a6c03..31492ef3697 100644
--- a/gcc/testsuite/gcc.target/i386/sse-22.c
+++ b/gcc/testsuite/gcc.target/i386/sse-22.c
@@ -878,6 +878,8 @@ test_2 (_mm_cvt_roundss_sh, __m128h, __m128h, __m128, 8)
test_2 (_mm_cvt_roundsd_sh, __m128h, __m128h, __m128d, 8)
test_2 (_mm512_fmul_round_pch, __m512h, __m512h, __m512h, 8)
test_2 (_mm512_fcmul_round_pch, __m512h, __m512h, __m512h, 8)
+test_2 (_mm_fmul_round_sch, __m128h, __m128h, __m128h, 8)
+test_2 (_mm_fcmul_round_sch, __m128h, __m128h, __m128h, 8)
test_2x (_mm512_cmp_round_ph_mask, __mmask32, __m512h, __m512h, 1, 8)
test_2x (_mm_cmp_round_sh_mask, __mmask8, __m128h, __m128h, 1, 8)
test_2x (_mm_comi_round_sh, int, __m128h, __m128h, 1, 8)
@@ -954,6 +956,10 @@ test_3 (_mm_fnmsub_round_sh, __m128h, __m128h, __m128h, __m128h, 9)
test_3 (_mm512_fmadd_round_pch, __m512h, __m512h, __m512h, __m512h, 8)
test_3 (_mm512_fcmadd_round_pch, __m512h, __m512h, __m512h, __m512h, 8)
test_3 (_mm512_maskz_fmul_round_pch, __m512h, __mmask16, __m512h, __m512h, 8)
+test_3 (_mm_maskz_fmul_round_sch, __m128h, __mmask8, __m128h, __m128h, 8)
+test_3 (_mm_maskz_fcmul_round_sch, __m128h, __mmask8, __m128h, __m128h, 8)
+test_3 (_mm_fmadd_round_sch, __m128h, __m128h, __m128h, __m128h, 8)
+test_3 (_mm_fcmadd_round_sch, __m128h, __m128h, __m128h, __m128h, 8)
test_3 (_mm512_maskz_fcmul_round_pch, __m512h, __mmask16, __m512h, __m512h, 8)
test_3x (_mm512_mask_cmp_round_ph_mask, __mmask32, __mmask32, __m512h, __m512h, 1, 8)
test_3x (_mm_mask_cmp_round_sh_mask, __mmask8, __mmask8, __m128h, __m128h, 1, 8)
@@ -1022,8 +1028,16 @@ test_4 (_mm512_mask3_fmadd_round_pch, __m512h, __m512h, __m512h, __m512h, __mmas
test_4 (_mm512_mask3_fcmadd_round_pch, __m512h, __m512h, __m512h, __m512h, __mmask16, 8)
test_4 (_mm512_maskz_fmadd_round_pch, __m512h, __mmask16, __m512h, __m512h, __m512h, 8)
test_4 (_mm512_maskz_fcmadd_round_pch, __m512h, __mmask16, __m512h, __m512h, __m512h, 8)
+test_4 (_mm_mask_fmadd_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
+test_4 (_mm_mask_fcmadd_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
+test_4 (_mm_mask3_fmadd_round_sch, __m128h, __m128h, __m128h, __m128h, __mmask8, 8)
+test_4 (_mm_mask3_fcmadd_round_sch, __m128h, __m128h, __m128h, __m128h, __mmask8, 8)
+test_4 (_mm_maskz_fmadd_round_sch, __m128h, __mmask8, __m128h, __m128h, __m128h, 8)
+test_4 (_mm_maskz_fcmadd_round_sch, __m128h, __mmask8, __m128h, __m128h, __m128h, 8)
test_4 (_mm512_mask_fmul_round_pch, __m512h, __m512h, __mmask16, __m512h, __m512h, 8)
test_4 (_mm512_mask_fcmul_round_pch, __m512h, __m512h, __mmask16, __m512h, __m512h, 8)
+test_4 (_mm_mask_fmul_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
+test_4 (_mm_mask_fcmul_round_sch, __m128h, __m128h, __mmask8, __m128h, __m128h, 8)
test_4x (_mm_mask_reduce_round_sh, __m128h, __m128h, __mmask8, __m128h, __m128h, 123, 8)
test_4x (_mm_mask_roundscale_round_sh, __m128h, __m128h, __mmask8, __m128h, __m128h, 123, 8)
test_4x (_mm_mask_getmant_sh, __m128h, __m128h, __mmask8, __m128h, __m128h, 1, 1)
diff --git a/gcc/testsuite/gcc.target/i386/sse-23.c b/gcc/testsuite/gcc.target/i386/sse-23.c
index dc39d7e2012..4a110e86855 100644
--- a/gcc/testsuite/gcc.target/i386/sse-23.c
+++ b/gcc/testsuite/gcc.target/i386/sse-23.c
@@ -815,6 +815,16 @@
#define __builtin_ia32_vfmulcph_v32hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmulcph_v32hf_mask_round(A, C, D, B, 8)
#define __builtin_ia32_vfcmulcph_v32hf_round(A, B, C) __builtin_ia32_vfcmulcph_v32hf_round(A, B, 8)
#define __builtin_ia32_vfcmulcph_v32hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmulcph_v32hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_round(A, B, C, D) __builtin_ia32_vfmaddcsh_v8hf_round(A, B, C, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmaddcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfmaddcsh_v8hf_maskz_round(B, C, D, A, E) __builtin_ia32_vfmaddcsh_v8hf_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_round(A, B, C, D) __builtin_ia32_vfcmaddcsh_v8hf_round(A, B, C, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmaddcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmaddcsh_v8hf_maskz_round(B, C, D, A, E) __builtin_ia32_vfcmaddcsh_v8hf_maskz_round(B, C, D, A, 8)
+#define __builtin_ia32_vfmulcsh_v8hf_round(A, B, C) __builtin_ia32_vfmulcsh_v8hf_round(A, B, 8)
+#define __builtin_ia32_vfmulcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfmulcsh_v8hf_mask_round(A, C, D, B, 8)
+#define __builtin_ia32_vfcmulcsh_v8hf_round(A, B, C) __builtin_ia32_vfcmulcsh_v8hf_round(A, B, 8)
+#define __builtin_ia32_vfcmulcsh_v8hf_mask_round(A, C, D, B, E) __builtin_ia32_vfcmulcsh_v8hf_mask_round(A, C, D, B, 8)
/* avx512fp16vlintrin.h */
#define __builtin_ia32_vcmpph_v8hf_mask(A, B, C, D) __builtin_ia32_vcmpph_v8hf_mask(A, B, 1, D)
--
2.18.1