From: "cvs-commit at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug rtl-optimization/97756] [11/12/13/14 Regression] Inefficient handling of 128-bit arguments
Date: Mon, 13 Nov 2023 09:06:42 +0000
X-Bugzilla-Product: gcc
X-Bugzilla-Component: rtl-optimization
X-Bugzilla-Version: 11.0
X-Bugzilla-Keywords: missed-optimization, ra
X-Bugzilla-Severity: normal
X-Bugzilla-Status: NEW
X-Bugzilla-Priority: P2
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: 11.5
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97756

--- Comment #14 from CVS Commits ---
The master branch has been updated by Roger Sayle:

https://gcc.gnu.org/g:0a140730c970870a5125beb1114f6c01679a040e

commit r14-5385-g0a140730c970870a5125beb1114f6c01679a040e
Author: Roger Sayle
Date:   Mon Nov 13 09:05:16 2023 +0000

    i386: Improve reg pressure of double word right shift then truncate.

    This patch improves register pressure during reload, inspired by PR 97756.
    Normally, a double-word right-shift by a constant produces a double-word
    result, the highpart of which is dead when followed by a truncation.
    The dead code calculating the high part gets cleaned up post-reload, so
    the issue isn't normally visible, except for the increased register
    pressure during reload, sometimes leading to odd register assignments.
    Providing a post-reload splitter, which clobbers a single wordmode
    result register instead of a doubleword result register, helps (a bit).

    An example demonstrating this effect is:

    unsigned long foo (__uint128_t n)
    {
      unsigned long a = n & MASK60;
      unsigned long b = (n >> 60);
      b = b & MASK60;
      unsigned long c = (n >> 120);
      return a+b+c;
    }

    which currently with -O2 generates (13 instructions):

    foo:
            movabsq $1152921504606846975, %rcx
            xchgq   %rdi, %rsi
            movq    %rsi, %rax
            shrdq   $60, %rdi, %rax
            movq    %rax, %rdx
            movq    %rsi, %rax
            movq    %rdi, %rsi
            andq    %rcx, %rax
            shrq    $56, %rsi
            andq    %rcx, %rdx
            addq    %rsi, %rax
            addq    %rdx, %rax
            ret

    with this patch, we generate one less mov (12 instructions):

    foo:
            movabsq $1152921504606846975, %rcx
            xchgq   %rdi, %rsi
            movq    %rdi, %rdx
            movq    %rsi, %rax
            movq    %rdi, %rsi
            shrdq   $60, %rdi, %rdx
            andq    %rcx, %rax
            shrq    $56, %rsi
            addq    %rsi, %rax
            andq    %rcx, %rdx
            addq    %rdx, %rax
            ret

    The significant difference is easier to see via diff:

    <       shrdq   $60, %rdi, %rax
    <       movq    %rax, %rdx
    ---
    >       shrdq   $60, %rdi, %rdx

    Admittedly a single "mov" isn't much of a saving on modern architectures,
    but as demonstrated by the PR, people still track the number of them.
    2023-11-13  Roger Sayle

    gcc/ChangeLog
            * config/i386/i386.md (3_doubleword_lowpart): New
            define_insn_and_split to optimize register usage of doubleword
            right shifts followed by truncation.
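For reproducing the before/after comparison locally, the sketch below is a
self-contained variant of the example quoted above. The commit message leaves
MASK60 undefined; the value assumed here, (1UL << 60) - 1, matches the
1152921504606846975 (0x0fffffffffffffff) constant loaded by movabsq in both
listings. An x86_64 target, where unsigned long is 64 bits and __uint128_t is
available, is also assumed.

    /* Hypothetical self-contained version of the commit's example.
       MASK60 is assumed to be the low-60-bit mask; this matches the
       movabsq constant in the generated code shown above.  */
    #define MASK60 ((1UL << 60) - 1)

    unsigned long foo (__uint128_t n)
    {
      unsigned long a = n & MASK60;     /* bits 0..59   */
      unsigned long b = (n >> 60);      /* bits 60..119 ...            */
      b = b & MASK60;                   /* ... masked down to 60 bits  */
      unsigned long c = (n >> 120);     /* bits 120..127               */
      return a + b + c;
    }

Compiling this with "gcc -O2 -S" on compilers built before and after
r14-5385 and diffing the resulting assembly should show the shrdq/movq
difference quoted in the commit message.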