public inbox for gdb-prs@sourceware.org
From: "vries at gcc dot gnu.org" <sourceware-bugzilla@sourceware.org>
To: gdb-prs@sourceware.org
Subject: [Bug gdb/31832] [gdb] FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 3: attach (timeout)
Date: Fri, 21 Jun 2024 10:18:40 +0000	[thread overview]
Message-ID: <bug-31832-4717-kFck64MLUG@http.sourceware.org/bugzilla/> (raw)
In-Reply-To: <bug-31832-4717@http.sourceware.org/bugzilla/>

https://sourceware.org/bugzilla/show_bug.cgi?id=31832

--- Comment #8 from Tom de Vries <vries at gcc dot gnu.org> ---
(In reply to Thiago Jung Bauermann from comment #7)
> My machine has many cores but it's an older CPU model (Neoverse N1). These
> numbers show that the POWER10 system has a much higher capacity to churn
> out new threads than my system (no surprise there).  My understanding is
> that GDB is overwhelmed by the constant stream of newly spawned threads
> and takes a while to attach to all of them.
> 

Agreed.

[ FYI, one particular thing about my setup is that I build at -O0.  I just
tried at -O2, but I still run into this problem. ]

> As Pedro mentioned elsewhere¹, Linux doesn't provide a way for GDB to stop
> all of a process' threads, or cause new ones to spawn in a
> "ptrace-stopped" state. Without such a mechanism, the only way I can see
> of addressing this problem is by making GDB parallelize the job of
> attaching to all inferior threads using its worker threads — i.e., fight
> fire with fire. :)
> 
> That wouldn't be a trivial change though. IIUC it would mean that
> different inferior threads would have different tracers (the various GDB
> worker threads), and GDB would need to take care to use the correct worker
> thread to send ptrace commands to each inferior thread.
> 

One approach could be to have the gdb main thread do only the ptrace bit and
offload the rest of the loop to another thread.  But I'm not sure whether that
actually addresses the bottleneck.

> Another approach would be to see if there's a way to make
> attach_proc_task_lwp_callback () faster, but from reading the code it
> doesn't look like there's anything too slow there — except perhaps the
> call to linux_proc_pid_is_gone (), which reads /proc/$LWP/status. Though
> even that would be just a mitigation, since the fundamental limitation
> would still be there.
> 

I've played around a bit with this for half a day or so, but didn't get
anywhere.

> Alternatively, (considering that the testcase is contrived) can the
> testcase increase the timeout proportionally to the number of CPUs on the
> system?
> 

Or conversely, put a proportional limit on the number of threads in the
test-case.

> > dir_entries: 594518
> > no_lwp: 4412
> > lookup: 119037
> > skipped: 118355
> > insert: 682
> > attach: 471751
> > start_over: 2091
> > Cannot attach to lwp 2340832: Operation not permitted (1)
> > ...
> >
> > I'm not sure what this means, but I do notice the big difference between
> > dir_entries and lookup.  So only 20% of the time we find the starttime and
> > can use the cache.
> 
> I thought that not being able to read starttime from /proc meant that the
> thread was gone. But from the statistics I pasted above, about 34% of the
> time GDB didn't find the starttime and still was able to attach to all
> but one of the new threads. My understanding is that there's a race
> condition between GDB and the Linux kernel when reading the stat file for
> a newly created thread.
> 
> This is harmless though: if starttime can't be obtained, GDB will try to
> attach to the thread anyway.
> 

Agreed, it's harmless (though slow of course).

-- 
You are receiving this mail because:
You are on the CC list for the bug.


Thread overview: 9+ messages
2024-06-01  7:17 [Bug gdb/31832] New: " vries at gcc dot gnu.org
2024-06-01  7:23 ` [Bug gdb/31832] " vries at gcc dot gnu.org
2024-06-01  8:10 ` vries at gcc dot gnu.org
2024-06-02 12:36 ` bernd.edlinger at hotmail dot de
2024-06-03 16:58 ` vries at gcc dot gnu.org
2024-06-07  0:13 ` thiago.bauermann at linaro dot org
2024-06-12  9:53 ` vries at gcc dot gnu.org
2024-06-21  1:21 ` thiago.bauermann at linaro dot org
2024-06-21 10:18 ` vries at gcc dot gnu.org [this message]
