From: "segher at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/101523] Huge number of combine attempts
Date: Thu, 21 Mar 2024 07:56:06 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #35 from Segher Boessenkool ---
(In reply to Richard Biener from comment #34)
> The change itself looks reasonable given costs, though maybe 2 -> 2
> combinations should not trigger when the cost remains the same?  In
> this case it definitely doesn't look profitable, does it?  Well,
> in theory it might hide latency and the 2nd instruction can issue
> at the same time as the first.

No, it definitely should be done.
As I showed back then, it costs less than 1% extra compile time on *any
platform* on average, and it reduced code size by 1%-2% everywhere.

It also cannot get stuck: any combination is attempted only once, and any
combination that succeeds eats up a loglink.  It is finite, (almost) linear
in fact.

Any backend is free to say certain insns shouldn't combine at all.  This
will lead to reduced performance though.

- ~ - ~ -

Something that is the real complaint here: it seems we do not GC often
enough, only after processing a BB (or EBB)?  That adds up for artificial
code like this, sure.

And the "param to give an upper limit to how many combination attempts are
done (per BB)" offer is on the table still, too.  I don't think it would
ever be useful (if you want your code to compile faster, just write better
code!), but :-)
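To make the termination argument concrete: here is a minimal Python model
(not GCC source; `run_combine`, `try_combine`, and `max_attempts` are
hypothetical names for illustration) of why the pass is finite and roughly
linear.  Each candidate is a loglink; every pair is tried at most once, a
successful combination consumes its link, and an optional cap models the
proposed per-BB attempt-limit param:

```python
def run_combine(links, try_combine, max_attempts=None):
    """Toy model of the combine loop, not GCC's implementation.

    links: list of candidate (def_insn, use_insn) loglink pairs.
    try_combine: predicate deciding whether a combination succeeds.
    Returns (attempts, surviving_links).
    """
    attempts = 0
    tried = set()       # any combination is attempted only once
    surviving = []
    for pair in links:
        if pair in tried:
            continue
        tried.add(pair)
        if max_attempts is not None and attempts >= max_attempts:
            surviving.append(pair)   # the proposed per-BB upper limit
            continue
        attempts += 1
        if not try_combine(pair):
            surviving.append(pair)   # a failed attempt keeps its link
        # on success the loglink is eaten: the pair is simply dropped

    # attempts can never exceed the number of distinct links, so the
    # whole process is bounded by the initial link count: (almost) linear.
    return attempts, surviving

# Duplicate (1, 2) is skipped; odd-numbered defs "succeed" in this toy run.
links = [(1, 2), (2, 3), (3, 4), (1, 2)]
attempts, left = run_combine(links, lambda p: p[0] % 2 == 1)
```

Since attempts are bounded by the distinct-link count and each success
strictly shrinks the link set, the loop cannot revisit work, which is the
point being made above.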