From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/111166] gcc unnecessarily creates vector operations for packing 32 bit integers into struct (x86_64)
Date: Mon, 28 Aug 2023 07:37:21 +0000
X-Bugzilla-Component: target
X-Bugzilla-Version: 13.2.1
X-Bugzilla-Keywords: missed-optimization

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111166

Richard Biener changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
    Last reconfirmed|                            |2023-08-28
      Ever confirmed|0                           |1

--- Comment #1 from Richard Biener ---
Confirmed.
We vectorize the scalar code:

> ./cc1 -quiet t.i -O2 -fopt-info-vec -fdump-tree-slp-details
weird_gcc_behaviour.c:15:41: optimized: basic block part vectorized using 16 byte vectors

generating

uint64_t turn_into_struct (u32 a, u32 b, u32 c, u32 d)
{
  u32 * vectp.4;
  vector(4) unsigned int * vectp.3;
  struct quad_u32 D.2865;
  uint64_t _11;
  vector(4) unsigned int _13;

  [local count: 1073741824]:
  _13 = {a_2(D), b_4(D), c_6(D), d_8(D)};
  MEM <vector(4) unsigned int> [(unsigned int *)&D.2865] = _13;
  _11 = do_smth_with_4_u32 (D.2865);
  D.2865 ={v} {CLOBBER(eol)};
  return _11;
}

and

weird_gcc_behaviour.c:15:41: note: Cost model analysis:
a_2(D) 1 times scalar_store costs 12 in body
b_4(D) 1 times scalar_store costs 12 in body
c_6(D) 1 times scalar_store costs 12 in body
d_8(D) 1 times scalar_store costs 12 in body
a_2(D) 1 times vector_store costs 12 in body
node 0x5d35f70 1 times vec_construct costs 36 in prologue
weird_gcc_behaviour.c:15:41: note: Cost model analysis for part in loop 0:
  Vector cost: 48
  Scalar cost: 48
weird_gcc_behaviour.c:15:41: note: Basic block will be vectorized using SLP

We choose the vector side at the same cost because we assume it will win on
code size.  Practically a single vector store instead of four scalar stores
is also good for store forwarding.
We get

turn_into_struct:
.LFB0:
        .cfi_startproc
        movd    %edi, %xmm1
        movd    %esi, %xmm4
        movd    %edx, %xmm0
        movd    %ecx, %xmm3
        punpckldq       %xmm4, %xmm1
        punpckldq       %xmm3, %xmm0
        movdqa  %xmm1, %xmm2
        punpcklqdq      %xmm0, %xmm2
        movaps  %xmm2, -24(%rsp)
        movq    -24(%rsp), %rdi
        movq    -16(%rsp), %rsi
        jmp     do_smth_with_4_u32

instead of (-fno-tree-vectorize)

turn_into_struct:
.LFB0:
        .cfi_startproc
        xorl    %eax, %eax
        movl    %ecx, %r8d
        movl    %edi, %ecx
        movl    %edx, %r9d
        movabsq $-4294967296, %r10
        movq    %rax, %rdi
        xorl    %edx, %edx
        salq    $32, %r8
        andq    %r10, %rdi
        orq     %rcx, %rdi
        movq    %rsi, %rcx
        salq    $32, %rcx
        movl    %edi, %esi
        orq     %rcx, %rsi
        movq    %rdx, %rcx
        andq    %r10, %rcx
        movq    %rsi, %rdi
        orq     %r9, %rcx
        movl    %ecx, %ecx
        orq     %r8, %rcx
        movq    %rcx, %rsi
        jmp     do_smth_with_4_u32

and our guess for code size is correct (47 bytes for vector vs. 67 for
scalar).  The latency of the scalar code is also quite a bit higher.  The
spilling should be OK; the store should forward nicely.

Unless you can come up with an actual benchmark showing the vector code is
slower, I'd say it isn't.  Given it's smaller, it should also win on the
icache side when not executed frequently.  So - not a bug?

The spilling could be avoided by using movq, movhlps + movq, but it's call
argument handling, so possibly difficult to achieve.