From: "segher at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug rtl-optimization/101523] Huge number of combine attempts
Date: Thu, 07 Mar 2024 16:11:10 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #24 from Segher Boessenkool ---
(In reply to Andreas Krebbel from comment #21)
> Wouldn't it in this particular case be possible to recognize already in
> try_combine that separating the move out of the parallel cannot lead to
> additional optimization opportunities? To me it looks like we are just
> recreating the situation we had before merging the INSNs into a parallel.
> Is there a situation where this could lead to any improvement in the end?

It might be possible. It's not trivial at all though, especially once you
consider other patterns, other targets, everything. Anything that grossly
reduces what we try will not fly.

This testcase is very degenerate; if we can recognise something about that
and make combine handle it better, that could be done. Or I'll do my
proposed "do not try more than 40 billion things" patch.

As it is now, combine only ever reconsiders anything if it *did* make
changes. So, if you see it reconsidering things a lot, you also see it
making a lot of changes. And all those changes make for materially better
generated code (combine always tests that before making a change).
Changing things so that combine makes fewer changes directly means you
want it to optimise less well.
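
For what it's worth, the "do not try more than 40 billion things" idea is
nothing more than a global budget on attempted combinations. A minimal
sketch in plain C of what that could look like; the names, the threshold,
and the hook point are all made up for illustration here, not taken from
an actual patch:

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical budget on attempted combinations, checked before
     each try.  The threshold is illustrative ("40 billion things");
     a real number would need tuning so that ordinary functions never
     come anywhere near it.  */
  static uint64_t combine_attempts;
  static const uint64_t combine_attempt_limit = 40000000000ull;

  static bool
  combine_budget_ok (void)
  {
    /* Degenerate testcases exhaust the budget and stop early;
       normal code never notices the counter exists.  */
    return ++combine_attempts <= combine_attempt_limit;
  }

The caller (try_combine, in this sketch) would simply give up on a
candidate when combine_budget_ok () returns false, leaving the insns as
they are: a bounded amount of missed optimisation on pathological input,
in exchange for guaranteed termination in reasonable time.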