From: "jakub at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/107748] [13 Regression] Isn't _mm_cvtsbh_ss incorrect?
Date: Fri, 18 Nov 2022 11:46:40 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107748

--- Comment #3 from Jakub Jelinek ---
(In reply to Hongtao.liu from comment #2)
> float
> _mm_cvtsbh_ss (__bf16 __A)
> {
>   union { float sf; __bf16 bf[2]; } __tmp;
>   __tmp.sf = 0.0f;
>   __tmp.bf[1] = __A;
>   return __tmp.sf;
> }
>
> Looks like gcc can optimize it to
>
> _mm_cvtsbh_ss(bool _Accum):
>         movd    %xmm0, %eax
>         sall    $16, %eax
>         movd    %eax, %xmm0
>         ret

That is an option too, but please uglify the sf and bf identifiers above
with __.

Also, not just for this but more importantly for the __bf16 -> float
conversions gcc emits for -ffast-math or for cstorebf4 or cbranchcc4, it
would be nice if we optimized those so that when the source and destination
are in SSE registers we don't convert from SSE to GPR, shift, and convert
back from GPR to SSE; we could instead do it through some permutation of
the SSE register that just pretends it is a V*HImode, moves the first
element to the second and zeros the first (and perhaps all elements above
the second too, or not, whatever is faster).  Dunno if it could be done as
a peephole2, or something different.  Just try:

__attribute__((optimize ("fast-math"))) float
foo (__bf16 x)
{
  return x;
}

int
bar (__bf16 x, __bf16 y)
{
  return x == y;
}

void baz (void);

void
qux (__bf16 x, __bf16 y)
{
  if (x == y)
    baz ();
}

Oh, and one more thing: for -mavx512bf16 -mavx512vl -ffast-math it would be
nice to use the AVX512BF16 instruction for float -> __bf16 conversions
rather than a library routine.  But that instruction doesn't handle sNaNs
properly and flushes subnormals to 0, so I think we shouldn't do it if
HONOR_NANS (BFmode) or !flag_unsafe_math_optimizations.
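
A minimal sketch of the uglified spelling being requested (the __sf/__bf
member names are illustrative, not necessarily what was committed):

float
_mm_cvtsbh_ss (__bf16 __A)
{
  /* Reserved (__-prefixed) names so the intrinsics header cannot clash
     with user macros or identifiers.  */
  union { float __sf; __bf16 __bf[2]; } __tmp;
  __tmp.__sf = 0.0f;
  __tmp.__bf[1] = __A;  /* a bf16 value is the high 16 bits of a float */
  return __tmp.__sf;
}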
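
And a hand-written illustration of the keep-it-in-SSE idea described above,
assuming SSE2 intrinsics; it uses a 32-bit lane shift rather than the
V*HImode permutation as written, and the function name is made up -- this
is not GCC's actual expansion:

#include <immintrin.h>

/* Shift the bf16 payload (low 16 bits of lane 0) into the high half of
   the low 32-bit lane, zero-filling the bottom half, so there is no
   SSE -> GPR -> SSE round trip.  */
static float
bf16_to_float_in_sse (__m128i __v)
{
  __m128i __w = _mm_slli_epi32 (__v, 16);          /* pslld $16 */
  return _mm_cvtss_f32 (_mm_castsi128_ps (__w));
}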
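
For the last point, a sketch of the AVX512BF16 hardware path (compile with
-mavx512bf16 -mavx512vl; the wrapper name is made up).  The underlying
VCVTNEPS2BF16 instruction is exactly the one with the sNaN/subnormal
caveats noted in the comment:

#include <immintrin.h>

/* VCVTNEPS2BF16 rounds to nearest even, but it does not quiet sNaNs and
   flushes subnormal inputs to zero, hence the suggested
   HONOR_NANS (BFmode) / !flag_unsafe_math_optimizations guard.  */
__bf16
float_to_bf16_hw (float __x)
{
  return _mm_cvtness_sbh (__x);
}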