From: "amonakov at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug rtl-optimization/114261] [13/14 Regression] Scheduling takes excessive time (97%) since r13-5154-g733a1b777f1
Date: Wed, 13 Mar 2024 13:56:16 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114261

--- Comment #8 from Alexander Monakov ---

If we want to get rid of the compilation time regression sooner rather than
later, I can suggest limiting my change to functions that call setjmp:

diff --git a/gcc/sched-deps.cc b/gcc/sched-deps.cc
index c23218890f..ae23f55274 100644
--- a/gcc/sched-deps.cc
+++ b/gcc/sched-deps.cc
@@ -3695,7 +3695,7 @@ deps_analyze_insn (class deps_desc *deps, rtx_insn *insn)
       CANT_MOVE (insn) = 1;
 
-      if (!reload_completed)
+      if (!reload_completed && cfun->calls_setjmp)
 	{
 	  /* Scheduling across calls may increase register pressure by extending
 	     live ranges of pseudos over the call.  Worse, in presence of setjmp

That way we retain the "correctness fix" part of r13-5154-g733a1b777f1 and
keep the previous status quo on normal functions (the quadratic behavior on
asms demonstrated in comment #5 would also remain).
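
For context, the source comment in the quoted hunk alludes to the core setjmp
hazard: if the scheduler extends a pseudo's live range across a setjmp call
and that pseudo ends up in a caller-saved register, its value after a longjmp
is indeterminate (C11 7.13.2.1 guarantees only volatile locals modified
between setjmp and longjmp). A minimal sketch of that hazard follows; it is a
hypothetical illustration, not the bug's testcase:

/* Hypothetical example (not from the bug report) of why live ranges
   must not be extended across a setjmp call.  */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf env;

static void
bounce (void)
{
  longjmp (env, 1);
}

int
main (void)
{
  /* Non-volatile local: if it is modified between setjmp and longjmp,
     its value after the jump is indeterminate (C11 7.13.2.1).  */
  int x = 1;

  if (setjmp (env) == 0)
    {
      x = 2;        /* modified after setjmp ...  */
      bounce ();    /* ... and before longjmp.  */
    }

  /* May print 1 or 2 depending on whether x was kept in a
     caller-saved register across the call -- exactly the kind of
     live range the scheduler must be conservative about.  */
  printf ("%d\n", x);
  return 0;
}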