public inbox for gdb-patches@sourceware.org
* Parallelization of dwarf2 symtab parsing
From: Matheus Branco Borella @ 2022-12-21 18:46 UTC
  To: gdb-patches

I was wondering if there's anything design-wise blocking running the calls to
`dw2_expand_symtabs_matching_one` made in
`cooked_index_functions::expand_symtabs_matching`
in parallel, using a similar approach to what building the psymtabs currently
uses. I tried my hand at it and there doesn't seem to be anything insurmountable
about it, though I haven't been able to get it to work yet.

Doing that naively, though, there are a few troublesome variables and structures
that cause data races. The ones I've found are:
    (1) per_objfile->queue,
    (2) per_objfile->sym_cu,
    (3) per_bfd->dwp_checked,
    (4) per_objfile->m_dwarf2_cus, and
    (5) the CU cache.

From what I can see:

(1) and (2) can easily be made safe by just making them thread-local, seeing as,
going down from `dw2_expand_symtabs_matching_one`, they get built fresh and are
only used in the context of that call, before being reset again when it exits.
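
A minimal sketch of what I mean for (1) and (2), with stand-in types since
this isn't how the real `dwarf2_per_objfile` members are actually declared:

  #include <queue>

  /* Stand-ins for the real gdb types, just to show the shape.  */
  struct dwarf2_cu;
  struct queue_item;

  /* State that is only live for the duration of one call to
     dw2_expand_symtabs_matching_one.  Giving each worker thread its own
     copy removes the races on per_objfile->queue and per_objfile->sym_cu.  */
  struct expansion_state
  {
    std::queue<queue_item *> queue;
    dwarf2_cu *sym_cu = nullptr;
  };

  /* One instance per thread, reset at the start of each expansion call.  */
  static thread_local expansion_state this_thread_state;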

(3) can also similarly be easily made safe by having it be a `std::once_flag`
and loading up the DWP in a call to `std::call_once`, since the flag is only
ever set once to mark the load, and never reset.
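
Something along these lines, glossing over what the actual DWP load call
looks like:

  #include <mutex>

  /* Sketch only: per_bfd->dwp_checked becomes a std::once_flag, and the
     DWP lookup runs under std::call_once, so concurrent threads block
     until the first one has finished.  */
  struct per_bfd_sketch
  {
    std::once_flag dwp_checked;
  };

  static void
  ensure_dwp_checked (per_bfd_sketch *per_bfd)
  {
    std::call_once (per_bfd->dwp_checked,
                    [] { /* the existing DWP load would go here */ });
  }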

(4), though, is where I've hit a bit of a snag. Since it's a global registry of
CUs, all threads must have a shared and coherent view of it. Using a mutex to
lock the map can be done in one of two ways, both with equally undesirable
results:
- We can lock the mutex for the whole lifetime of the parsing of a new CU,
 only unlocking it after the call to `per_objfile->set_cu()`. Obviously,
 the problem with this approach is that it stalls other threads trying to
 parse new CUs.
- We can lock the mutex for just the duration of `per_objfile->set_cu()` and
 let threads parse all CUs as they come in. The problem with this is that
 there will be some amount of rework in the form of multiple calls to
 `load_full_comp_unit`: for a given CU 'X', there can now be a window of
 time where one thread is midway through `load_full_comp_unit(X)` and,
 because it hasn't called `per_objfile->set_cu(X)` yet, another thread can
 try to load the same CU by calling `load_full_comp_unit(X)` a second time,
 since from its perspective `per_objfile->get_cu(X) == nullptr`.
As far as solutions to this go, I see three of them:
- We weaken the assertion in `per_objfile->set_cu()` so that, instead of
 checking for whether it exists in the objfile, it instead checks for
 whether the objects are sufficiently similar (for some definition of
 sufficiently similar), before discarding either the new or the old CU.
   I'm not familiar with how the lifetimes of these objects are managed, so
 I can't say how good of an option this is. The possible discarding issue
 could maybe be solved by altering the return type and having the caller
 give up on its object if someone else has already beaten them to parsing
 it. This would be the simplest solution to implement.
- We analyze the graph of which CUs cause other CUs to get loaded and run
 the parallel batches in topological order.
   Assuming that we can even know that info. ahead of time, this approach
 would still be the most intrusive to the code and the most complex to
 pull off.
- We introduce a type that represents a CU that is currently being loaded,
 but that hasn't finished loading quite yet, for threads that need that CU
 to await on.
   This avoids the rework inherent to the first solution, and the need for
 dependency info. ahead of time inherent to the second. But it also has the
 potential to be slower than either of them, seeing as, in the worst case
 scenario, some threads can stall multiple times waiting for CUs to load.
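
A rough sketch of what that last option could look like, using a plain
condition variable (all names here are made up; the real registry would
need more care than this):

  #include <condition_variable>
  #include <mutex>

  struct dwarf2_cu;  /* stand-in for the real type */

  /* Placeholder for a CU that some thread has started loading but hasn't
     finished yet.  Other threads that need the same CU wait on it instead
     of calling load_full_comp_unit themselves.  */
  struct cu_in_flight
  {
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
    dwarf2_cu *result = nullptr;

    /* Called by the loading thread once load_full_comp_unit finishes.  */
    void publish (dwarf2_cu *cu)
    {
      {
        std::lock_guard<std::mutex> lock (m);
        result = cu;
        done = true;
      }
      cv.notify_all ();
    }

    /* Called by any other thread that needs this CU.  */
    dwarf2_cu *wait_for ()
    {
      std::unique_lock<std::mutex> lock (m);
      cv.wait (lock, [this] { return done; });
      return result;
    }
  };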

(5) is conceptually pretty simple to understand, but fairly complex to solve. We
can model the CU cache cleanup functionality as a form of age-based garbage
collection that we're trying to parallelize. And there's plenty of fun to be
had by going down that particular rabbit hole. :^)

So I'd like to hear your thoughts and opinions on this. If there's anything I've
missed or got wrong, please let me know.

Thanks,
Matt.


* Re: Parallelization of dwarf2 symtab parsing
From: Tom Tromey @ 2022-12-21 20:19 UTC
  To: Matheus Branco Borella via Gdb-patches; +Cc: Matheus Branco Borella

>>>>> "Matt" == Matheus Branco Borella via Gdb-patches <gdb-patches@sourceware.org> writes:

Matt> I was wondering if there's anything design-wise blocking running the calls to
Matt> `dw2_expand_symtabs_matching_one` made in
Matt> `cooked_index_functions::expand_symtabs_matching`
Matt> in parallel, using a similar approach to what building the psymtabs currently
Matt> uses.

I think it's reasonably difficult, though it might be possible.

Matt> Doing that naively, though, there are a few troublesome variables and structures
Matt> that cause data races. The ones I've found are:

Yeah, the biggest problem has to do with data races.  In particular gdb
has some maybe non-obvious spots that can interfere.

First, choosing symbol names in the DWARF reader (the "full" reader, not
the indexer) uses a per-objfile bcache:

  if (need_copy)
    retval = objfile->intern (retval);

(The code to compute symbol names is a horrible mess, I wish we could
clean that up...)

Also any spot in the DWARF reader referring to the objfile obstack is a
potential data race.  According to M-x occur there are 43 of these.
These can probably be changed to some kind of sharded obstack to avoid
interference.
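
Roughly the shape I mean by "sharded", where the arena type is a stand-in
for whatever gdb would really use (auto_obstack, the bcache, ...); none of
the names here match the actual interfaces:

  #include <array>
  #include <cstddef>
  #include <functional>
  #include <mutex>
  #include <thread>

  /* Several independent arenas, each with its own lock; a thread picks a
     shard by hashing its id, so unrelated threads rarely contend.  */
  template<typename Arena, std::size_t N = 8>
  struct sharded_arena
  {
    struct shard
    {
      std::mutex lock;
      Arena arena;
    };

    std::array<shard, N> shards;

    /* Run FN against this thread's shard while holding its lock.  */
    template<typename Fn>
    auto with_shard (Fn &&fn)
    {
      std::size_t idx
        = std::hash<std::thread::id> () (std::this_thread::get_id ()) % N;
      std::lock_guard<std::mutex> guard (shards[idx].lock);
      return fn (shards[idx].arena);
    }
  };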

There may also be problems with the buildsym code; and of course when a
symtab is actually installed, that must be made thread-safe.  I tried to
make buildsym less reliant on global variables a while ago, though I
don't recall if that was a 100% success.  (Note there's a "legacy" API
that uses globals, but the DWARF reader avoids this.)

Matt> (4) per_objfile->m_dwarf2_cus, and

Matt> (4), though, is where I've hit a bit of a snag. Since it's a global registry of
Matt> CUs, all threads must have a shared and coherent view of it. Using a mutex to
Matt> lock the map can be done in one of two ways, both with equally undesirable
Matt> results:
...
Matt> - We analyze the graph of which CUs cause other CUs to get loaded and run
Matt>  the parallel batches in topological order.
Matt>    Assuming that we can even know that info. ahead of time, this approach
Matt> would still be the most intrusive to the code and the most complex to
Matt> pull off.
Matt> - We introduce a type that represents a CU that is currently being loaded,
Matt>  but that hasn't finished loading quite yet, for threads that need that CU
Matt>  to await on.

Basically, I think you just want to make sure that a given CU is only
parsed by a single thread.  It's better to arrange things to avoid doing
any extra work here.  I don't think this should be super hard to do, for
example you can have a map, locked only on use, from the
dwarf2_per_cu_data to a std::future that will hold the resulting symtab
or whatever.

Then, when multiple CUs refer to some other CU, whichever thread gets to
that included CU first will win.  I think the data structures should be
set up such that these can be stitched together again after parsing.
It's probably fine to just do this in a post-pass that waits for all the
futures and then sets up the inclusions.

The cooked indexer uses a strategy like this.
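
Conceptually something like the following for that locked map of per-CU
futures; everything except the standard library types is a stand-in:

  #include <future>
  #include <mutex>
  #include <unordered_map>

  struct dwarf2_per_cu_data;   /* stand-in */
  struct compunit_symtab;      /* stand-in */

  struct cu_expansion_map
  {
    std::mutex lock;
    std::unordered_map<dwarf2_per_cu_data *,
                       std::shared_future<compunit_symtab *>> futures;

    /* Return the future for PER_CU.  If no thread has claimed this CU
       yet, register PROMISE's future and set CLAIMED so the caller knows
       it should do the actual expansion and eventually call set_value on
       the promise.  */
    std::shared_future<compunit_symtab *>
    get_or_claim (dwarf2_per_cu_data *per_cu,
                  std::promise<compunit_symtab *> &promise, bool &claimed)
    {
      std::lock_guard<std::mutex> guard (lock);
      auto it = futures.find (per_cu);
      if (it != futures.end ())
        {
          claimed = false;
          return it->second;
        }
      claimed = true;
      auto fut = promise.get_future ().share ();
      futures.emplace (per_cu, fut);
      return fut;
    }
  };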

Also note that cross-CU references are relatively rare, basically only
occurring when 'dwz' has been used.  So this situation isn't extremely
important for normal users.

Matt> (5) is conceptually pretty simple to understand, but fairly complex to solve. We
Matt> can model the CU cache cleanup functionality as a form of age-based garbage
Matt> collection that we're trying to parallelize. And there's plenty of fun to be
Matt> had by going down that particular rabbit hole. :^)

We have some evidence that this cache is just not very good:

    https://sourceware.org/bugzilla/show_bug.cgi?id=25703

Changing it radically to work in some other way seems totally fine.
Like, I think if the DIEs from some CU are ever needed for a cross-CU
reference, then just keeping those around for the duration of the
current "expansion operation" is fine.

Matt> So I'd like to hear your thoughts and opinions on this. If there's anything I've
Matt> missed or got wrong, please let me know.

A different idea is to try to somewhat merge the partial (cooked index)
DWARF reader and the full reader, and do lazy expansion of CUs.  This
has the same end goal -- speed up CU expansion.

    https://sourceware.org/bugzilla/show_bug.cgi?id=29398

The idea here is that, even when some data from a CU is needed (like a
type, or if gdb stops in some function in the CU), frequently the rest
of the data there is not needed.  So, reading it all is needlessly slow.

There's a second idea here as well, which is unifying the top-level
names between the index and the symtab -- occasionally we've had
divergences here, which result in weird bugs.

The latter could be done without full laziness, though, by using the
index to build the symtab but then filling in the details using the
current code.

The big benefit of lazy expansion is that it would be much faster for
the pathologically large CUs that do sometimes appear (whereas with
parallel reading, a really big CU will still be slow).  The main
drawback is that it's more complicated, so there's more chance for bugs,
and the bugs will be harder to understand when they do occur.

Tom

