From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/108724] [11 Regression] Poor codegen when summing two arrays without AVX or SSE
Date: Tue, 23 May 2023 12:55:13 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108724

--- Comment #10 from Richard Biener ---
On trunk we're back to vectorizing, but as intended with DImode, which saves
half of the loads and stores; we think that covers the extra arithmetic
required (by quite some margin).

        movabsq $9223372034707292159, %rcx
        movq    (%rdx), %rax
        movq    (%rsi), %rsi
        movq    %rcx, %rdx
        andq    %rax, %rdx
        andq    %rsi, %rcx
        xorq    %rsi, %rax
        addq    %rcx, %rdx
        movabsq $-9223372034707292160, %rcx
        andq    %rcx, %rax
        xorq    %rdx, %rax
        movq    %rax, (%rdi)

vs

        movl    (%rdx), %eax
        addl    (%rsi), %eax
        movl    %eax, (%rdi)
        movl    4(%rdx), %eax
        addl    4(%rsi), %eax
        movl    %eax, 4(%rdi)
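For reference, a minimal C sketch of the SWAR-style addition the DImode
sequence above performs (the helper name and test values here are made up for
illustration): the two constants are 0x7FFFFFFF7FFFFFFF and 0x8000000080000000,
used to add two 32-bit lanes packed in one 64-bit register while keeping the
carry from the low lane out of the high lane.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: add two 32-bit lanes packed in a uint64_t
       without letting a carry from the low lane spill into the high lane. */
    static uint64_t swar_add32x2(uint64_t a, uint64_t b)
    {
        const uint64_t lo_mask = 0x7FFFFFFF7FFFFFFFULL; /*  9223372034707292159 */
        const uint64_t hi_mask = 0x8000000080000000ULL; /* -9223372034707292160 */

        uint64_t low  = (a & lo_mask) + (b & lo_mask); /* add bits 0..30 of each lane;
                                                          no carry crosses the lane boundary */
        uint64_t high = (a ^ b) & hi_mask;             /* sum of the lane sign bits, carry-free */
        return low ^ high;                             /* merge sign bits back in */
    }

    int main(void)
    {
        uint64_t a = ((uint64_t)3u << 32) | 0xFFFFFFFFu; /* lanes {3, 0xFFFFFFFF} */
        uint64_t b = ((uint64_t)5u << 32) | 1u;          /* lanes {5, 1} */
        /* expect 0000000800000000: high lane 8, low lane wraps to 0 */
        printf("%016llx\n", (unsigned long long)swar_add32x2(a, b));
        return 0;
    }

The masking is what the andq/xorq/addq instructions above implement; the scalar
version below "vs" simply does two independent 32-bit adds through memory.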