From: "already5chosen at yahoo dot com"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/97832] AoSoA complex caxpy-like loops: AVX2+FMA -Ofast 7 times slower than -O3
Date: Fri, 25 Nov 2022 13:19:37 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97832

--- Comment #16 from Michael_S ---
On an unrelated note, why does the loop overhead use so many instructions?
Assuming that I am as misguided as gcc about load-op combining, I would write
it as:

        sub          %rax, %rdx
.L3:
        vmovupd      (%rdx,%rax), %ymm1
        vmovupd      32(%rdx,%rax), %ymm0
        vfmadd213pd  32(%rax), %ymm3, %ymm1
        vfnmadd213pd (%rax), %ymm2, %ymm0
        vfnmadd231pd 32(%rdx,%rax), %ymm3, %ymm0
        vfnmadd231pd (%rdx,%rax), %ymm2, %ymm1
        vmovupd      %ymm0, (%rax)
        vmovupd      %ymm1, 32(%rax)
        addq         $64, %rax
        decl         %esi
        jnz          .L3

The loop overhead in my variant is 3 x86 instructions == 2 macro-ops, vs.
5 x86 instructions == 4 macro-ops in the gcc variant.
Also, in the gcc variant every memory access has a displacement, which makes
each of those instructions 1 byte longer. In my variant only half of the
accesses have a displacement.

I think in the past I have seen cases where gcc generated optimal or
near-optimal code sequences for loop overhead. I wonder why it cannot do it
here.
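
For context, the kind of AoSoA caxpy-like loop the bug title and the assembly
above refer to looks roughly like the sketch below. This is not the PR's
actual testcase: the function and parameter names (caxpy_like, y, x, f_re,
f_im, n) are illustrative, and the body is written as a plain complex
y -= f*x update rather than a transcription of the exact FMA operand order
above. Each iteration touches one 64-byte block of each array: four doubles
of real parts followed by four doubles of imaginary parts, i.e. one AVX2 ymm
register per half.

/* Hedged sketch, not the PR's testcase: AoSoA complex y -= f*x. */
void caxpy_like(double *restrict y, const double *restrict x,
                double f_re, double f_im, int n)
{
    for (int i = 0; i < n; ++i) {          /* one 64-byte block per iteration */
        for (int k = 0; k < 4; ++k) {
            double x_re = x[i*8 + k];      /* first ymm: real parts       */
            double x_im = x[i*8 + 4 + k];  /* second ymm: imaginary parts */
            /* y -= f*x, expanded into four multiply-add shaped terms,
               matching the four vfmadd/vfnmadd instructions per iteration */
            y[i*8 + k]     = y[i*8 + k]     - f_re*x_re + f_im*x_im;
            y[i*8 + 4 + k] = y[i*8 + 4 + k] - f_re*x_im - f_im*x_re;
        }
    }
}

With AVX2+FMA the body of such a loop comes down to roughly the two loads,
four FMAs and two stores shown in the assembly above; everything beyond that
is the loop overhead being discussed. The hand-written variant keeps that
overhead to addq/decl/jnz by pre-subtracting the two base addresses once
before the loop and reaching the second array through (%rdx,%rax) addressing,
so only a single register has to be advanced each iteration.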