From: "hubicka at ucw dot cz" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug gcov-profile/99105] profile streaming scales poorly to projects with many source files
Date: Mon, 15 Feb 2021 14:19:54 +0000
Message-ID: <bug-99105-4-Aov3h9baPH@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-99105-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99105

--- Comment #3 from Jan Hubicka <hubicka at ucw dot cz> ---
> A small improvement can be achieved by the removal of libgcov I/O buffering:
> https://gcc.gnu.org/git/?p=gcc.git;a=patch;h=5a17015c096012b9e43a8dd45768a8d5fb3a3aee

So it effectively replaces gcov's own buffered I/O with stdio.  First, I am
not sure how safe that is (we had a lot of fun with using malloc), and
it also adds a dependency on stdio, which is not necessarily a good idea
for embedded targets.  I am not sure how often gcov is used there.

But why is glibc stdio more effective?  Is it because our buffer size of
1k is way too small (as it seems, judging from the profile, which is
dominated by fread calls rather than open/lock/close)?
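
Just to illustrate the buffer-size theory (a minimal sketch, not
libgcov code; the file name and the 64k size are made up), handing
stdio a larger user-supplied buffer via setvbuf cuts the number of
underlying read syscalls roughly in proportion:

#include <stdio.h>

int
main (void)
{
  /* Hypothetical profile file; stands in for a .gcda file.  */
  FILE *f = fopen ("test.gcda", "rb");
  if (!f)
    return 1;

  /* With a 1k buffer, as in libgcov's own buffering, reading 64k of
     counters costs 64 reads; with this buffer it costs one.  setvbuf
     must be called before any other operation on the stream.  */
  static char buf[64 * 1024];
  setvbuf (f, buf, _IOFBF, sizeof buf);

  unsigned word;
  while (fread (&word, sizeof word, 1, f) == 1)
    ; /* consume the counters */

  fclose (f);
  return 0;
}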
> 
> But the key thing is likely the ability to omit profile modifications
> (read/modify/write) for parts of a binary that are not trained.
The problem there is the per-program summaries, which need to be updated
even for files that are never visited.

It seems that producing one file in a tar-like format that can be
expanded into gcda files by gcov-tool would be a good idea.  Even if we
need to lock the whole file, it is probably faster than a lot of small
I/Os.  To avoid waiting for the lock, one can simply allow multiple
profile files to be created and teach libgcov to acquire an unlocked
file in pseudorandom order.
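
A rough sketch of that acquisition scheme (hypothetical file names and
plain POSIX flock; nothing like this exists in libgcov today): each
process tries the candidate files in a pseudorandom order, first
non-blocking, and only blocks if all of them are busy:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

#define NFILES 8   /* hypothetical number of alternative profile files */

/* Return an fd holding an exclusive lock on one of
   profile.0.gcda ... profile.7.gcda.  The starting index is seeded by
   the pid so concurrent processes tend to try different files first.  */
static int
acquire_profile_file (void)
{
  unsigned seed = (unsigned) getpid ();
  unsigned start = rand_r (&seed) % NFILES;

  for (int pass = 0; pass < 2; pass++)
    for (int i = 0; i < NFILES; i++)
      {
        char name[64];
        snprintf (name, sizeof name, "profile.%u.gcda",
                  (start + i) % NFILES);
        int fd = open (name, O_RDWR | O_CREAT, 0666);
        if (fd < 0)
          continue;
        /* First pass: try without blocking; second pass: wait.  */
        int op = LOCK_EX | (pass == 0 ? LOCK_NB : 0);
        if (flock (fd, op) == 0)
          return fd;
        close (fd);
      }
  return -1;
}

Seeding by pid means contention on any single gcda file drops roughly
by the number of alternatives, at the cost of gcov-tool having to merge
them afterwards.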

Honza

Thread overview: 24+ messages
2021-02-15 13:46 [Bug gcov-profile/99105] New: " hubicka at gcc dot gnu.org
2021-02-15 13:56 ` [Bug gcov-profile/99105] " hubicka at gcc dot gnu.org
2021-02-15 13:58 ` marxin at gcc dot gnu.org
2021-02-15 14:19   ` Jan Hubicka
2021-02-15 13:59 ` marxin at gcc dot gnu.org
2021-02-15 14:19 ` hubicka at ucw dot cz [this message]
2021-02-15 14:38 ` marxin at gcc dot gnu.org
2021-02-15 14:50 ` hubicka at ucw dot cz
2021-02-15 14:56 ` marxin at gcc dot gnu.org
2021-02-15 15:17 ` hubicka at ucw dot cz
2021-02-15 15:18 ` marxin at gcc dot gnu.org
2021-02-15 15:21   ` Jan Hubicka
2021-02-15 15:21 ` hubicka at ucw dot cz
2021-02-15 15:21 ` marxin at gcc dot gnu.org
2021-02-15 15:23 ` marxin at gcc dot gnu.org
2021-02-15 15:32   ` Jan Hubicka
2021-02-15 15:32 ` hubicka at ucw dot cz
2021-02-15 16:12 ` hubicka at ucw dot cz
2021-02-15 20:20 ` [Bug gcov-profile/99105] [11 regression] " hubicka at gcc dot gnu.org
2021-02-15 20:40 ` hubicka at gcc dot gnu.org
2021-02-16 10:31 ` marxin at gcc dot gnu.org
2021-02-16 13:57 ` marxin at gcc dot gnu.org
2021-03-04 15:22 ` cvs-commit at gcc dot gnu.org
2021-03-04 15:23 ` marxin at gcc dot gnu.org
