public inbox for gdb-patches@sourceware.org
From: Tom Tromey <tom@tromey.com>
To: Matheus Branco Borella via Gdb-patches <gdb-patches@sourceware.org>
Cc: Matheus Branco Borella <mbrla@usp.br>
Subject: Re: Parallelization of dwarf2 symtab parsing
Date: Wed, 21 Dec 2022 13:19:30 -0700	[thread overview]
Message-ID: <87bknwik6l.fsf@tromey.com> (raw)
In-Reply-To: <CA+o+VZsWx7MivHVQG5ccrOLwikDM5YXuMY6wQ70xfvG=P_h8Pw@mail.gmail.com> (Matheus Branco Borella via Gdb-patches's message of "Wed, 21 Dec 2022 15:46:59 -0300")

>>>>> "Matt" == Matheus Branco Borella via Gdb-patches <gdb-patches@sourceware.org> writes:

Matt> I was wondering if there's anything design-wise blocking running the calls to
Matt> `dw2_expand_symtabs_matching_one` made in
Matt> `cooked_index_functions::expand_symtabs_matching`
Matt> in parallel, using a similar approach to what building the psymtabs currently
Matt> uses.

I think it's reasonably difficult, though it might be possible.

Matt> Doing that naively, though, there are a few trouble variables and structures
Matt> that cause data races. The ones I've found are:

Yeah, the biggest problem has to do with data races.  In particular, gdb
has some perhaps non-obvious spots where threads can interfere.

First, choosing symbol names in the DWARF reader (the "full" reader, not
the indexer) uses a per-objfile bcache:

  if (need_copy)
    retval = objfile->intern (retval);

(The code to compute symbol names is a horrible mess, I wish we could
clean that up...)

Also any spot in the DWARF reader referring to the objfile obstack is a
potential data race.  According to M-x occur there are 43 of these.
These can probably be changed to some kind of sharded obstack to avoid
interference.

There may also be problems with the buildsym code; and of course when a
symtab is actually installed, that must be made thread-safe.  I tried to
make buildsym less reliant on global variables a while ago, though I
don't recall if that was a 100% success.  (Note there's a "legacy" API
that uses globals, but the DWARF reader avoids this.)

Matt> (4) per_objfile->m_dwarf2_cus, and

Matt> (4), though, is where I've hit a bit of a snag. Since it's a global registry of
Matt> CUs, all threads must have a shared and coherent view of it. Using a mutex to
Matt> lock the map can be done in one of two ways, both with equally undesirable
Matt> results:
...
Matt> - We analyze the graph of which CUs cause other CUs to get loaded and run
Matt>  the parallel batches in topological order.
Matt>    Assuming we can even know that information ahead of time, this approach
Matt> would still be the most intrusive to the code and the most complex to
Matt> pull off.
Matt> - We introduce a type that represents a CU that is currently being loaded,
Matt>  but that hasn't finished loading quite yet, for threads that need that CU
Matt>  to await on.

Basically, I think you just want to make sure that a given CU is only
parsed by a single thread; it's better to arrange things up front so no
work is duplicated.  I don't think this should be super hard to do.  For
example, you can have a map, locked only on use, from the
dwarf2_per_cu_data to a std::future that will hold the resulting symtab
or whatever.

Then, when multiple CUs refer to some other CU, whichever thread gets to
that included CU first will win.  I think the data structures should be
set up such that these can be stitched together again after parsing.
It's probably fine to just do this in a post-pass that waits for all the
futures and then sets up the inclusions.

The cooked indexer uses a strategy like this.

Also note that cross-CU references are relatively rare, basically only
occurring when 'dwz' has been used.  So this situation isn't extremely
important for normal users.

Matt> (5) is conceptually pretty simple to understand, but fairly complex to solve. We
Matt> can model the CU cache cleanup functionality as a form of age-based garbage
Matt> collection that we're trying to parallelize. And there's plenty of fun to be
Matt> had by going down that particular rabbit hole. :^)

We have some evidence that this cache is just not very good:

    https://sourceware.org/bugzilla/show_bug.cgi?id=25703

Changing it radically to work in some other way seems totally fine.
Like, I think if the DIEs from some CU are ever needed for a cross-CU
reference, then just keeping those around for the duration of the
current "expansion operation" is fine.

Matt> So I'd like to hear your thoughts and opinions on this. If there's anything I've
Matt> missed or got wrong, please let me know.

A different idea is to try to somewhat merge the partial (cooked index)
DWARF reader and the full reader, and do lazy expansion of CUs.  This
has the same end goal -- speed up CU expansion.

    https://sourceware.org/bugzilla/show_bug.cgi?id=29398

The idea here is that, even when some data from a CU is needed (like a
type, or if gdb stops in some function in the CU), frequently the rest
of the data there is not needed.  So, reading it all is needlessly slow.

There's a second idea here as well, which is unifying the top-level
names between the index and the symtab -- occasionally we've had
divergences here, which result in weird bugs.

The latter could be done without full laziness, though, by using the
index to build the symtab but then filling in the details using the
current code.

The big benefit of lazy expansion is that it would be much faster for
the pathologically large CUs that do sometimes appear (whereas with
parallel reading, a really big CU will still be slow).  The main
drawback is that it's more complicated, so there's more chance for bugs,
and the bugs will be harder to understand when they do occur.

Tom
