public inbox for gcc-bugs@sourceware.org
From: "tneumann at users dot sourceforge.net" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug libgcc/107675] [13 Regression] GCC-13 is significantly slower to startup on C++ statically linked programs
Date: Fri, 16 Dec 2022 23:55:21 +0000
Message-ID: <bug-107675-4-vAgPJxjroo@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-107675-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107675

--- Comment #16 from Thomas Neumann <tneumann at users dot sourceforge.net> ---
I have committed a fix:

commit 6e56633daae79f514b0e71f4d9849bcd8d9ce71f
Author: Thomas Neumann
Date:   Fri Dec 9 18:23:44 2022 +0100

    initialize fde objects lazily

    When registering an unwind frame with __register_frame_info_bases,
    we currently initialize that fde object eagerly. This has the
    advantage that it is immutable afterwards and can safely be accessed
    from multiple threads, but it has the disadvantage that we pay the
    initialization cost even if the application never throws an
    exception.

    This commit changes the logic to initialize the objects lazily. The
    objects themselves are inserted into the b-tree when registering the
    frame, but the sorted fde_vector is not constructed yet. Only the
    first time an exception tries to pass through the registered code is
    the object initialized. We detect that with double-checked locking:
    first a relaxed load of the sorted bit, then a re-check under a
    mutex when the object was not yet initialized. Note that the check
    is implicitly safe with respect to concurrent frame deregistration,
    as trying to deregister a frame that is on the unwinding path of a
    concurrent exception is inherently racy.

    libgcc/ChangeLog:

            * unwind-dw2-fde.c: Initialize fde object lazily when the
            first exception tries to pass through.
Thread overview: 17+ messages

2022-11-14 13:03 [Bug libstdc++/107675] New: [13 Regression] GCC-13 is significantly slower to startup on C++ programs  tnfchris at gcc dot gnu.org
2022-11-14 13:09 ` [Bug libstdc++/107675] "  tnfchris at gcc dot gnu.org
2022-11-14 13:20 `  jakub at gcc dot gnu.org
2022-11-14 13:27 `  fw at gcc dot gnu.org
2022-11-14 14:28 `  tnfchris at gcc dot gnu.org
2022-11-14 17:54 `  pinskia at gcc dot gnu.org
2022-11-14 17:55 `  pinskia at gcc dot gnu.org
2022-11-17 11:04 `  tnfchris at gcc dot gnu.org
2022-11-17 11:15 `  fw at gcc dot gnu.org
2022-11-17 11:24 `  tnfchris at gcc dot gnu.org
2022-11-20 23:57 `  tnfchris at gcc dot gnu.org
2022-11-20 23:58 ` [Bug libgcc/107675] "  tnfchris at gcc dot gnu.org
2022-11-22  9:35 `  tnfchris at gcc dot gnu.org
2022-12-09 19:56 ` [Bug libgcc/107675] [13 Regression] GCC-13 is significantly slower to startup on C++ statically linked programs  m.cencora at gmail dot com
2022-12-09 22:46 `  tneumann at users dot sourceforge.net
2022-12-16 23:55 `  tneumann at users dot sourceforge.net  [this message]
2022-12-20 15:38 `  rguenth at gcc dot gnu.org