From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: by sourceware.org (Postfix, from userid 48)
	id 82346385B502; Sat, 26 Nov 2022 18:36:10 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 82346385B502
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default;
	t=1669487770; bh=KJ6Gq5VqO8pUe1EChTw0hrjQ/BSJTDGccrWMyu7RPPQ=;
	h=From:To:Subject:Date:In-Reply-To:References:From;
	b=m89Yr9fIzU4Xm2XkJKjk4SoFq9v4oFNG8Xtou7NDA+/hzhVTkx5bqBhCA5gKIbWTu
	 HgcyzhOzzohTixpn8qAC3d2wStjBsy9p46JNZQIW9c1ehTfOjSSxauVTa66S8s3/SM
	 xUQK+2hfD22tDQIsD0rSKkHrfstEa6ypR3oWpZoE=
From: "already5chosen at yahoo dot com" 
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/97832] AoSoA complex caxpy-like loops: AVX2+FMA -Ofast 7 times slower than -O3
Date: Sat, 26 Nov 2022 18:36:09 +0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: tree-optimization
X-Bugzilla-Version: 10.2.0
X-Bugzilla-Keywords: missed-optimization
X-Bugzilla-Severity: normal
X-Bugzilla-Who: already5chosen at yahoo dot com
X-Bugzilla-Status: RESOLVED
X-Bugzilla-Resolution: FIXED
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: rguenth at gcc dot gnu.org
X-Bugzilla-Target-Milestone: 12.0
X-Bugzilla-Flags: 
X-Bugzilla-Changed-Fields: 
Message-ID: 
In-Reply-To: 
References: 
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
List-Id: 

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97832

--- Comment #20 from Michael_S ---
(In reply to Richard Biener from comment #17)
> (In reply to Michael_S from comment #16)
> > On an unrelated note, why does the loop overhead use so many instructions?
> > Assuming that I am as misguided as gcc about load-op combining, I would
> > write it as:
> >   sub  %rax, %rdx
> > .L3:
> >   vmovupd      (%rdx,%rax), %ymm1
> >   vmovupd      32(%rdx,%rax), %ymm0
> >   vfmadd213pd  32(%rax), %ymm3, %ymm1
> >   vfnmadd213pd (%rax), %ymm2, %ymm0
> >   vfnmadd231pd 32(%rdx,%rax), %ymm3, %ymm0
> >   vfnmadd231pd (%rdx,%rax), %ymm2, %ymm1
> >   vmovupd      %ymm0, (%rax)
> >   vmovupd      %ymm1, 32(%rax)
> >   addq  $64, %rax
> >   decl  %esi
> >   jnz   .L3
> >
> > The loop overhead in my variant is 3 x86 instructions == 2 macro-ops,
> > vs. 5 x86 instructions == 4 macro-ops in the gcc variant.
> > Also, in the gcc variant all memory accesses have a displacement, which
> > makes them 1 byte longer. In my variant only half of the accesses have a
> > displacement.
> >
> > I think in the past I have seen cases where gcc generates optimal or
> > near-optimal code sequences for loop overhead. I wonder why it cannot do
> > it here.
>
> I don't think we currently consider IVs based on the difference of two
> addresses.

It seems to me that I have seen you doing it. But maybe I am confusing gcc
with clang.

> The cost benefit of no displacement is only size,

Size is pretty important in high-IPC SIMD loops, especially on Intel and when
the number of iterations is small, because Intel has a 16-byte fetch out of
the L1I cache. SIMD instructions tend to be long, and not many instructions
fit within 16 bytes even when memory accesses have no offsets. Offsets add
insult to injury.

> otherwise
> I have no idea why we have biased the %rax accesses by -32. Why we
> fail to consider decrement-to-zero for the counter IV is probably because
> IVCANON would add such an IV but the vectorizer replaces that and IVOPTs
> doesn't consider re-adding that.

Sorry, I have no idea about the meaning of IVCANON.