From: "krebbel at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug rtl-optimization/101523] Huge number of combine attempts
Date: Thu, 07 Mar 2024 09:34:00 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #21 from Andreas Krebbel ---
(In reply to Segher Boessenkool from comment #16)
...
> When some insns have changed (or might have changed, combine does not always
> know the details), combinations of the insn with later insns are tried again.
> Sometimes this finds new combination opportunities.
>
> Not retrying combinations after one of the insns has changed would be a
> regression.

Wouldn't it be possible in this particular case to recognize already in
try_combine that separating the move out of the parallel cannot lead to
additional optimization opportunities? To me it looks like we are just
recreating the situation we had before merging the INSNs into a parallel.
Is there a situation where this could lead to any improvement in the end?
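
[Editorial note: the following is a minimal, standalone sketch of the kind of
check the comment proposes, i.e. detecting that splitting the move back out of
the parallel merely recreates the pre-merge i2 and therefore need not trigger a
retry of later combinations. It is not real combine.cc code: the struct, the
helper name split_recreates_original, and the textual patterns are invented for
illustration only; the actual try_combine works on RTL patterns and would
compare them with something like rtx_equal_p rather than strcmp.]

```c
/* Standalone sketch (not GCC code): models the proposed early-exit.
   "struct insn" and the string patterns are simplified stand-ins for
   rtx_insn and PATTERN (insn).  */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct insn
{
  const char *pattern;   /* textual stand-in for PATTERN (insn) */
};

/* Hypothetical helper: after a failed PARALLEL match, combine splits the
   move back out into new_i2.  If new_i2 is identical to the original i2,
   we have only recreated the situation that existed before the merge, so
   re-trying combinations of i2 with later insns cannot find anything new.  */
static bool
split_recreates_original (const struct insn *old_i2,
                          const struct insn *new_i2)
{
  return strcmp (old_i2->pattern, new_i2->pattern) == 0;
}

int
main (void)
{
  struct insn old_i2 = { "(set (reg 60) (mem (reg 61)))" };
  struct insn new_i2 = { "(set (reg 60) (mem (reg 61)))" };

  if (split_recreates_original (&old_i2, &new_i2))
    printf ("i2 unchanged: skip re-queuing later combinations\n");
  else
    printf ("i2 changed: retry combinations with later insns\n");
  return 0;
}
```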