From: "lee.imple at gmail dot com"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/114966] New: fails to optimize avx2 in-register permute written with std::experimental::simd
Date: Tue, 07 May 2024 02:57:07 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114966

            Bug ID: 114966
           Summary: fails to optimize avx2 in-register permute written
                    with std::experimental::simd
           Product: gcc
           Version: 14.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: lee.imple at gmail dot com
  Target Milestone: ---

This is another attempt to permute a simd register with
std::experimental::simd, like PR114908, but written differently.

The following is the same function written in both the std::experimental::simd
and GNU vector extension versions (available online at
https://godbolt.org/z/n3WvqcePo). The purpose is to permute the register from
[w, x, y, z] into [0, w, x, y].

```c++
#include <experimental/simd>
#include <cstdint>

namespace stdx = std::experimental;

using data_t = std::uint64_t;
constexpr std::size_t data_size = 4;

template <typename T>
using simd_of = std::experimental::simd<T, stdx::simd_abi::deduce_t<T, data_size>>;

using simd_t = simd_of<data_t>;

// stdx version
simd_t permute_simd(simd_t data) {
    return simd_t([=](auto i) -> data_t {
        constexpr size_t index = i - 1;
        if constexpr (index < data_size) {
            return data[index];
        } else {
            return 0;
        }
    });
}

typedef data_t vector_t [[gnu::vector_size(data_size * sizeof(data_t))]];

// gnu vector extension version
vector_t permute_vector(vector_t data) {
    return __builtin_shufflevector(data, vector_t{0}, 4, 0, 1, 2);
}
```

The code is compiled with the options `-O3 -march=x86-64-v3 -std=c++20`.
Although the two functions should have the same functionality, the assembly
GCC generates for them is quite different.
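The generator constructor invokes the lambda once per element with
`std::integral_constant<std::size_t, i>`, so for `i == 0` the unsigned `index`
wraps past `data_size` and that element becomes 0; the intended result is
`[0, data[0], data[1], data[2]]`. Roughly, `permute_simd` is equivalent to the
following scalar construction (a sketch only; `permute_simd_unrolled` is just
an illustrative name):

```c++
// Sketch of what the generator constructor computes, element by element
// (illustrative only; the helper name is made up).
simd_t permute_simd_unrolled(simd_t data) {
    simd_t out = data_t(0);        // broadcast zero; element 0 stays 0
    out[1] = data_t(data[0]);
    out[2] = data_t(data[1]);
    out[3] = data_t(data[2]);
    return out;
}
```

GCC's output for `permute_simd` keeps this element-by-element shape, while
`permute_vector` is compiled to a single permute plus a blend: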
```asm
permute_simd(std::experimental::parallelism_v2::simd<unsigned long, std::experimental::parallelism_v2::simd_abi::_VecBuiltin<32> >):
        vmovq   %xmm0, %rax
        vpsrldq $8, %xmm0, %xmm1
        vextracti128    $0x1, %ymm0, %xmm0
        vpunpcklqdq     %xmm0, %xmm1, %xmm1
        vpxor   %xmm0, %xmm0, %xmm0
        vpinsrq $1, %rax, %xmm0, %xmm0
        vinserti128     $0x1, %xmm1, %ymm0, %ymm0
        ret
permute_vector(unsigned long __vector(4)):
        vpxor   %xmm1, %xmm1, %xmm1
        vpermq  $144, %ymm0, %ymm0
        vpblendd        $3, %ymm1, %ymm0, %ymm0
        ret
```

However, Clang optimizes `permute_simd` into the same assembly as
`permute_vector`, so I think this is a missed optimization in GCC rather than
a bug in std::experimental::simd.

```asm
permute_simd(std::experimental::parallelism_v2::simd<unsigned long, std::experimental::parallelism_v2::simd_abi::_VecBuiltin<32> >):
        vpermpd $144, %ymm0, %ymm0              # ymm0 = ymm0[0,0,1,2]
        vxorps  %xmm1, %xmm1, %xmm1
        vblendps        $3, %ymm1, %ymm0, %ymm0 # ymm0 = ymm1[0,1],ymm0[2,3,4,5,6,7]
        retq
permute_vector(unsigned long __vector(4)):
        vpermpd $144, %ymm0, %ymm0              # ymm0 = ymm0[0,0,1,2]
        vxorps  %xmm1, %xmm1, %xmm1
        vblendps        $3, %ymm1, %ymm0, %ymm0 # ymm0 = ymm1[0,1],ymm0[2,3,4,5,6,7]
        retq
```
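For reference, the optimized sequence corresponds to roughly the following
hand-written AVX2 intrinsics (a sketch for illustration only; `permute_intrin`
is a made-up name and works on a raw `__m256i` rather than `simd_t`):

```c++
#include <immintrin.h>

// Illustrative sketch of the desired codegen using raw AVX2 intrinsics
// (made-up helper, not part of the original test case).
__m256i permute_intrin(__m256i data) {
    // [d0, d1, d2, d3] -> [d0, d0, d1, d2]   (imm8 0x90 == 144)
    __m256i perm = _mm256_permute4x64_epi64(data, 0x90);
    // Take the two low 32-bit lanes (64-bit element 0) from zero instead.
    return _mm256_blend_epi32(perm, _mm256_setzero_si256(), 0x03);
}
```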