public inbox for elfutils@sourceware.org
From: Mark Wielaard <mark@klomp.org>
To: Romain GEISSLER <romain.geissler@amadeus.com>
Cc: "elfutils-devel@sourceware.org" <elfutils-devel@sourceware.org>,
	Francois RIGAULT <francois.rigault@amadeus.com>
Subject: Re: Performance issue with systemd-coredump and container process linking 2000 shared libraries.
Date: Wed, 21 Jun 2023 21:39:48 +0200
Message-ID: <20230621193948.GP24233@gnu.wildebeest.org>
In-Reply-To: <D09215AD-FB87-4E60-BA28-DA43192D88BD@amadeus.com>

Hi Romain,

On Wed, Jun 21, 2023 at 06:14:27PM +0000, Romain GEISSLER wrote:
> > Le 21 juin 2023 à 18:24, Mark Wielaard <mark@klomp.org> a écrit :
> > 
> > Seeing those performance results, I understand why you suspected
> > the linked list data structure used in elf_getdata_rawchunk.
> > Would you be able to test a rebuilt libelf with the attached patch,
> > which replaces that data structure with a binary search tree?
> > 
> > It didn't really show much of a speedup locally (in the noise, maybe
> > 0.01 sec faster on a ~0.25 sec run). But if there are more than 2000
> > calls to elf_getdata_rawchunk it should make things faster.
> 
> Actually, I spent some time today reproducing the issue for real,
> using systemd directly. The details can be found in
> https://github.com/systemd/systemd/pull/28093, which was just
> merged. This change in systemd is enough to fix the biggest part of
> the "slow" parsing of cores. Yet even with my systemd patch, we will
> still call elf_getdata_rawchunk 2000 times if the crashing process
> had 2000 shared libraries, so your patch in elfutils might indeed
> still be welcome. I will test it later when I get home, or tomorrow.
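
For context, systemd-coredump pulls each piece it needs out of the core
through elf_getdata_rawchunk, which is why the call count scales with
the number of mapped libraries. The following is a rough sketch of that
call pattern using only public libelf entry points; it is an
illustration, not the actual systemd code, which is more involved:

  /* Build with: gcc -o notes notes.c -lelf  */
  #include <fcntl.h>
  #include <gelf.h>
  #include <libelf.h>
  #include <stdio.h>
  #include <unistd.h>

  int
  main (int argc, char **argv)
  {
    if (argc < 2)
      return 1;

    elf_version (EV_CURRENT);
    int fd = open (argv[1], O_RDONLY);
    Elf *elf = elf_begin (fd, ELF_C_READ, NULL);
    if (elf == NULL)
      return 1;

    size_t phnum;
    if (elf_getphdrnum (elf, &phnum) != 0)
      return 1;

    for (size_t i = 0; i < phnum; i++)
      {
        GElf_Phdr phdr;
        if (gelf_getphdr (elf, i, &phdr) == NULL || phdr.p_type != PT_NOTE)
          continue;

        /* Each call parses and caches the chunk inside the Elf handle;
           with thousands of calls the cost of that cache lookup adds up.  */
        Elf_Data *data = elf_getdata_rawchunk (elf, phdr.p_offset,
                                               phdr.p_filesz, ELF_T_NHDR);
        if (data != NULL)
          printf ("note segment %zu: %zu bytes\n", i, data->d_size);
      }

    elf_end (elf);
    close (fd);
    return 0;
  }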

That patch looks good. It should reduce the number of calls to
elf_getdata_rawchunk a lot, making it less urgent for that function to
be fast. But if you could test it, that would be appreciated. I'll also
test it a bit more and will probably merge it if no issues show up,
since it does seem better than keeping the linear list.
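
For anyone curious about the shape of the change: the idea is to key
the cached rawchunks on (offset, size, type) in a binary search tree,
for example via tsearch(3), so each elf_getdata_rawchunk call does an
O(log n) lookup instead of walking a linear list. What follows is a
minimal sketch of that idea, not the actual elfutils patch; the struct
and helper names are made up for illustration:

  #include <search.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* Hypothetical cache entry; the real code would also hold the
     resulting Elf_Data and its buffer.  */
  struct rawchunk_key
  {
    int64_t offset;
    size_t size;
    int type;
  };

  /* Total order on (offset, size, type) so identical requests land
     on the same tree node.  */
  static int
  chunk_compare (const void *a, const void *b)
  {
    const struct rawchunk_key *ka = a;
    const struct rawchunk_key *kb = b;
    if (ka->offset != kb->offset)
      return ka->offset < kb->offset ? -1 : 1;
    if (ka->size != kb->size)
      return ka->size < kb->size ? -1 : 1;
    return ka->type - kb->type;
  }

  /* Lookup-or-insert in O(log n); 2000 calls cost on the order of
     2000 * log2(2000) comparisons instead of ~2000^2 / 2 for a list.  */
  static struct rawchunk_key *
  cache_chunk (void **root, int64_t offset, size_t size, int type)
  {
    struct rawchunk_key *key = calloc (1, sizeof *key);
    if (key == NULL)
      return NULL;
    key->offset = offset;
    key->size = size;
    key->type = type;

    struct rawchunk_key **found = tsearch (key, root, chunk_compare);
    if (found == NULL)
      {
        free (key);            /* tsearch ran out of memory  */
        return NULL;
      }
    if (*found != key)
      free (key);              /* this chunk was already cached  */
    return *found;
  }

Tearing the whole tree down when the Elf handle is closed would go
through tdestroy (a GNU extension) rather than walking a list.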

Thanks,

Mark


Thread overview: 19+ messages
2023-06-14 13:39 Romain Geissler
2023-06-19 15:08 ` Mark Wielaard
2023-06-19 19:56   ` Romain GEISSLER
2023-06-19 20:54     ` Luca Boccassi
2023-06-20 11:59       ` Mark Wielaard
2023-06-20 12:12         ` Luca Boccassi
2023-06-20 13:15     ` Mark Wielaard
2023-06-20 21:37   ` Mark Wielaard
2023-06-20 22:05     ` Romain GEISSLER
2023-06-21 16:24       ` Mark Wielaard
2023-06-21 18:14         ` Romain GEISSLER
2023-06-21 19:39           ` Mark Wielaard [this message]
2023-06-22  8:10             ` Romain GEISSLER
2023-06-22 12:03               ` Mark Wielaard
2023-06-27 14:09         ` Mark Wielaard
2023-06-30 16:09           ` Romain GEISSLER
2023-06-30 22:35             ` Mark Wielaard
2023-06-22  2:40       ` Frank Ch. Eigler
2023-06-22 10:59         ` Mark Wielaard
