From: "rguenth at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/101523] Huge number of combine attempts
Date: Thu, 21 Mar 2024 13:22:45 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #41 from Richard Biener ---
(In reply to Segher Boessenkool from comment #38)
> (In reply to Richard Biener from comment #36)

[...]

> But linear is linear, and stays linear; for way too big code it is just as
> acceptable as for "normal" code. Just slow.
> If you don't want the compiler to take a long time compiling your way too
> big code, use -O0, or preferably do not write insane code in the first
> place :-) ;)

We promise to try to behave reasonably with insane code, but technically we
tell people to use at most -O1 for that.  That will at least avoid trying
three- and four-insn combinations.

[...]

> Ideally we'll not do *any* artificial limitations.

I agree.  And we should try hard to fix actual algorithmic problems, if they
exist, before resorting to limits.

> But GCC just throws its hat in the ring in other cases as well, say, too
> big RA problems. You do get not as high quality code as wanted, but at
> least you get something compiled in an acceptable timeframe :-)

Yep.  See above for my comment about -O1.  I think it's fine to take time
(and memory) to optimize code for high quality at -O2.  And if you throw
insane code at GCC, then also an insane amount of time and memory ;)

So I do wonder whether with -O1 the issue is gone already?  If not, then for
the sake of -O1 and insane code we want such a limit.  It can be more crude,
i.e. just count all attempts and stop altogether, or, like PRE, simply skip
the pass when the number of pseudos/blocks crosses a magic barrier.  I just
thought combine is a bit too core a part of our instruction selection, so
disabling it completely (after some point) would be too bad even for insane
code ...

Andreas - can you try --param max-combine-insns=2 please?  That is, I think,
what -O1 uses, and with it combine only does two-insn combinations.