From: "luoxhu at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/100866] PPC: Inefficient code for vec_revb(vector unsigned short) < P9
Date: Fri, 18 Jun 2021 01:37:20 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100866

--- Comment #6 from luoxhu at gcc dot gnu.org ---
For V4SI, it is also better to use vector splat and vector rotate operations:

revb:
.LFB0:
        .cfi_startproc
        vspltish %v1,8
        vspltisw %v0,-16
        vrlh %v2,%v2,%v1
        vrlw %v2,%v2,%v0
        blr

Performance improves from 7.322s to 2.445s on a small benchmark because the
load instruction is replaced.

But for V2DI, we don't have a "vspltisd" instruction to splat {32,32} into a
vector register before Power9, so lvx is still required?

vector unsigned long long revb_pwr7_l(vector unsigned long long a)
{
  return vec_rl(a, vec_splats((unsigned long long)32));
}

generates:

revb_pwr7_l:
.LFB1:
        .cfi_startproc
.LCF1:
0:      addis 2,12,.TOC.-.LCF1@ha
        addi 2,2,.TOC.-.LCF1@l
        .localentry     revb_pwr7_l,.-revb_pwr7_l
        addis %r9,%r2,.LC0@toc@ha
        addi %r9,%r9,.LC0@toc@l
        lvx %v0,0,%r9
        vrld %v2,%v2,%v0
        blr
.LC0:
        .quad 32
        .quad 32
        .align 4
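
For reference, a minimal intrinsic-level sketch of the V4SI splat+rotate
sequence shown above, using only the standard AltiVec builtins vec_splat_u16,
vec_splat_s32 and vec_rl from <altivec.h>; the function name revb_pwr7_w is
hypothetical, chosen to parallel revb_pwr7_l:

#include <altivec.h>

/* Hypothetical sketch: byte-reverse each 32-bit element without a
   constant-pool load.  First rotate each halfword left by 8 (swaps the
   two bytes of every halfword), then rotate each word by 16 (swaps the
   two halfwords of every word).  vspltisw can only splat -16..15, so
   -16 is used for the 16-bit word rotate; vrlw only uses the low 5 bits
   of the rotate count, and -16 mod 32 == 16. */
vector unsigned int revb_pwr7_w(vector unsigned int a)
{
  vector unsigned short h = (vector unsigned short) a;
  h = vec_rl(h, vec_splat_u16(8));                            /* vspltish 8  + vrlh */
  vector unsigned int w = (vector unsigned int) h;
  return vec_rl(w, (vector unsigned int) vec_splat_s32(-16)); /* vspltisw -16 + vrlw */
}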