From mboxrd@z Thu Jan 1 00:00:00 1970
From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/107093] AVX512 mask operations not simplified in fully masked loop
Date: Tue, 11 Oct 2022 10:51:06 +0000
X-Bugzilla-Product: gcc
X-Bugzilla-Component: target
X-Bugzilla-Version: 13.0
X-Bugzilla-Keywords: missed-optimization
X-Bugzilla-Severity: normal
X-Bugzilla-Status: UNCONFIRMED
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: ---

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107093

--- Comment #6 from Richard Biener ---
(In reply to Hongtao.liu from comment #4)
> Change "*k, CBC" to "?k, CBC" in *mov{qi,hi,si,di}_internal.
> Then RA does a good job of choosing kxnor for setting constm1_rtx to the mask
> register, and I got the below with your attached patch (change #if 0 to #if 1);
> it seems better than the original patch.
>
> foo:
> .LFB0:
>         .cfi_startproc
>         testl   %edi, %edi
>         jle     .L9
>         kxnorb  %k1, %k1, %k1
>         cmpl    $4, %edi
>         jl      .L11
> .L3:
>         vbroadcastsd    .LC2(%rip), %ymm3
>         vmovdqa .LC0(%rip), %xmm2
>         xorl    %eax, %eax
>         xorl    %ecx, %ecx
>         .p2align 4,,10
>         .p2align 3
> .L7:
>         vmovapd b(%rax), %ymm0{%k1}
>         addl    $4, %ecx
>         movl    %edi, %edx
>         vmulpd  %ymm3, %ymm0, %ymm1
>         subl    %ecx, %edx
>         cmpl    $4, %edx
>         vmovapd %ymm1, a(%rax){%k1}
>         vpbroadcastd    %edx, %xmm1
>         movl    $-1, %edx
>         vpcmpd  $1, %xmm1, %xmm2, %k1
>         kmovb   %k1, %esi
>         cmovge  %edx, %esi

I'm not sure the round-trip to GPRs for the sake of a cmovge is worth it; I
guess a branch would be better.
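For context, the quoted assembly looks like the fully masked vectorization of
a loop of roughly the following shape. This is a guess reconstructed from the
code (the array names `a` and `b` come from the assembly; the trip-count
parameter and the multiplier constant are placeholders, and the bug's actual
testcase may differ):

```c
/* Hypothetical scalar source for the quoted vectorized code.  With
   -O3 and AVX512 enabled, GCC vectorizes this as a fully masked loop:
   loads/stores under %k1, with the mask recomputed each iteration from
   the remaining trip count.  */
#define N 7            /* any count, including a tail shorter than a vector */

double a[N], b[N];

void foo(int n)
{
    for (int i = 0; i < n; i++)
        a[i] = b[i] * 3.0;  /* 3.0 stands in for the .LC2 constant */
}
```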
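A plain-C sketch of the two mask idioms under discussion (semantics only, not
the machine instructions; the helper names are made up for illustration):
kxnor of a register with itself yields all-ones whatever the old value, which
is why it is a cheap way to materialize the initial "all lanes active" mask
without a GPR, and the per-iteration tail mask is just a less-than compare of
the lane indices against the remaining count.

```c
#include <stdint.h>

/* kxnorb %k1, %k1, %k1 computes ~(k ^ k) == all-ones, independent of k's
   previous contents -- no GPR round-trip needed.  */
static uint8_t kxnorb(uint8_t a, uint8_t b) { return (uint8_t)~(a ^ b); }

/* The vpcmpd $1 (LT) of {0,1,2,3} against the broadcast remaining count:
   lane i stays active iff i < remaining (for a 4-lane vector).  */
static uint8_t tail_mask(int remaining)
{
    uint8_t m = 0;
    for (int i = 0; i < 4; i++)
        if (i < remaining)
            m |= (uint8_t)(1u << i);
    return m;
}
```

Note that for `remaining >= 4` the compare already yields 0x0f, i.e. all four
lanes active; the quoted cmovge only widens that to the literal -1 byte.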