public inbox for gdb-cvs@sourceware.org
* [binutils-gdb] Re-add zombie leader on exit, gdb/linux
@ 2022-03-10 11:42 Pedro Alves
From: Pedro Alves @ 2022-03-10 11:42 UTC (permalink / raw)
  To: gdb-cvs

https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=6cf20c46e37486b16b565046025c34b2e633dd33

commit 6cf20c46e37486b16b565046025c34b2e633dd33
Author: Pedro Alves <pedro@palves.net>
Date:   Mon Feb 21 20:07:20 2022 +0000

    Re-add zombie leader on exit, gdb/linux
    
    The current zombie leader detection code in linux-nat.c has a race --
    if a multi-threaded inferior exits just before check_zombie_leaders
    finds that the leader is now zombie via checking /proc/PID/status,
    check_zombie_leaders deletes the leader, assuming we won't get an
    event for that exit (true in some scenarios, but not in this one).
    That might seem mostly harmless, but it has some downsides:
    
     - later when we continue pulling events out of the kernel, we will
       collect the exit event of the non-leader threads, and once we see
       the last lwp in our list exit, we return _that_ lwp's exit code as
       whole-process exit code to infrun, instead of the leader's exit
       code.
    
     - this can cause a hang in stop_all_threads in infrun.c.  Say there
       are 2 threads in the process.  stop_all_threads stops each of those
       threads, and then waits for two stop or exit events, one for each
       thread.  If the whole process exits, and check_zombie_leaders hits
       the false-positive case, linux-nat.c will only return one event to
       GDB (the whole-process exit returned when we see the last thread,
       the non-leader thread, exit), making stop_all_threads hang forever
       waiting for a second event that will never come.
    
    However, in this false-positive scenario, where the whole process is
    exiting, as opposed to just the leader (with pthread_exit(), for
    example), we _will_ get an exit event shortly for the leader, after we
    collect the exit event of all the other non-leader threads.  Or put
    another way, we _always_ get an event for the leader after we see it
    become zombie.
    
    I tried a number of approaches to fix this:
    
    #1 - My first thought to address the race was to make GDB always
    report the whole-process exit status for the leader thread, not for
    whatever is the last lwp in the list.  We _always_ get a final exit
    (or exec) event for the leader, and when the race triggers, we're not
    collecting it.
    
    #2 - My second thought was to try to plug the race in the first place.
    
    I thought of making GDB call waitpid/WNOHANG for all non-leader
    threads immediately when the zombie leader is detected, assuming there
    would be an exit event pending for each of them waiting to be
    collected.  Turns out that that doesn't work -- you can see the leader
    become zombie _before_ the kernel kills all other threads.  Waitpid in
    that small time window returns 0, indicating no event.  Thankfully,
    we hit that race window all the time in practice, which is what kept
    us from trading one race for another.  Looking at the non-leader
    threads' status in /proc doesn't help either: the threads still show
    as running for a bit, for the same reason.
    
    #3 - My next attempt, which seemed promising, was to synchronously
    stop and wait for the stop for each of the non-leader threads.  For
    the scenario in question, this will collect all the exit statuses of
    the non-leader threads.  Then, if we are left with only the zombie
    leader in the lwp list, it means we either have a normal whole-process
    exit or an exec, in which case we should not delete the leader.  If
    _only_ the leader exited, like in gdb.threads/leader-exit.exp, then
    after pausing threads, we will still have at least one live non-leader
    thread in the list, and so we delete the leader lwp.  I got this
    working and polished, and it was only after staring at the kernel code
    to convince myself that this would really work (and it would, for the
    scenario I considered), that I realized I had failed to account for
    one scenario -- if any non-leader thread is _already_ stopped when
    some thread triggers a group exit, like e.g., if you have some threads
    stopped and then resume just one thread with scheduler-locking or
    non-stop, and that thread exits the process.  I also played with
    PTRACE_EVENT_EXIT, to see if it would help in any way to plug the race,
    and I couldn't find a way that it would result in any practical
    difference compared to looking at /proc/PID/status, with respect to
    having a race.
    
    So I concluded that there's no way to plug the race; we just have to
    deal with it.  That means going back to approach #1, which is the
    approach taken by this patch.
    
    Change-Id: I6309fd4727da8c67951f9cea557724b77e8ee979

Diff:
---
 gdb/linux-nat.c | 107 ++++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 80 insertions(+), 27 deletions(-)

diff --git a/gdb/linux-nat.c b/gdb/linux-nat.c
index 1555d3a79e3..d97a770bf83 100644
--- a/gdb/linux-nat.c
+++ b/gdb/linux-nat.c
@@ -247,6 +247,14 @@ static void save_stop_reason (struct lwp_info *lp);
 static void close_proc_mem_file (pid_t pid);
 static void open_proc_mem_file (ptid_t ptid);
 
+/* Return TRUE if LWP is the leader thread of the process.  */
+
+static bool
+is_leader (lwp_info *lp)
+{
+  return lp->ptid.pid () == lp->ptid.lwp ();
+}
+
 \f
 /* LWP accessors.  */
 
@@ -2814,7 +2822,23 @@ linux_nat_filter_event (int lwpid, int status)
 	  /* Don't report an event for the exit of an LWP not in our
 	     list, i.e. not part of any inferior we're debugging.
 	     This can happen if we detach from a program we originally
-	     forked and then it exits.  */
+	     forked and then it exits.  However, note that we may have
+	     earlier deleted a leader of an inferior we're debugging,
+	     in check_zombie_leaders.  Add it back here if so.  */
+	  for (inferior *inf : all_inferiors (linux_target))
+	    {
+	      if (inf->pid == lwpid)
+		{
+		  linux_nat_debug_printf
+		    ("Re-adding thread group leader LWP %d after exit.",
+		     lwpid);
+
+		  lp = add_lwp (ptid_t (lwpid, lwpid));
+		  lp->resumed = 1;
+		  add_thread (linux_target, lp->ptid);
+		  break;
+		}
+	    }
 	}
 
       if (lp == nullptr)
@@ -2865,13 +2889,12 @@ linux_nat_filter_event (int lwpid, int status)
   /* Check if the thread has exited.  */
   if (WIFEXITED (status) || WIFSIGNALED (status))
     {
-      if (!report_thread_events
-	  && num_lwps (lp->ptid.pid ()) > 1)
+      if (!report_thread_events && !is_leader (lp))
 	{
 	  linux_nat_debug_printf ("%s exited.",
 				  lp->ptid.to_string ().c_str ());
 
-	  /* If there is at least one more LWP, then the exit signal
+	  /* If this was not the leader exiting, then the exit signal
 	     was not the end of the debugged application and should be
 	     ignored.  */
 	  exit_lwp (lp);
@@ -3014,33 +3037,63 @@ check_zombie_leaders (void)
       leader_lp = find_lwp_pid (ptid_t (inf->pid));
       if (leader_lp != NULL
 	  /* Check if there are other threads in the group, as we may
-	     have raced with the inferior simply exiting.  */
+	     have raced with the inferior simply exiting.  Note this
+	     isn't a watertight check.  If the inferior is
+	     multi-threaded and is exiting, we may see the
+	     leader as zombie before we reap all the non-leader
+	     threads.  See comments below.  */
 	  && num_lwps (inf->pid) > 1
 	  && linux_proc_pid_is_zombie (inf->pid))
 	{
+	  /* A zombie leader in a multi-threaded program can mean one
+	     of three things:
+
+	     #1 - Only the leader exited, not the whole program, e.g.,
+	     with pthread_exit.  Since we can't reap the leader's exit
+	     status until all other threads are gone and reaped too,
+	     we want to delete the zombie leader right away, as it
+	     can't be debugged: we can't read its registers, etc.
+	     This is the main reason we check for zombie leaders
+	     disappearing.
+
+	     #2 - The whole thread-group/process exited (a group exit,
+	     via e.g. exit(3)), and there is (or will be shortly) an
+	     exit reported for each thread in the process, and then
+	     finally an exit for the leader once the non-leaders are
+	     reaped.
+
+	     #3 - There are 3 or more threads in the group, and a
+	     thread other than the leader exec'd.  See comments on
+	     exec events at the top of the file.
+
+	     Ideally we would never delete the leader for case #2.
+	     Instead, we want to collect the exit status of each
+	     non-leader thread, and then finally collect the exit
+	     status of the leader as normal and use its exit code as
+	     whole-process exit code.  Unfortunately, there's no
+	     race-free way to distinguish cases #1 and #2.  We can't
+	     assume the exit events for the non-leader threads are
+	     already pending in the kernel, nor can we assume the
+	     non-leader threads are in zombie state already.  Between
+	     the leader becoming zombie and the non-leaders exiting
+	     and becoming zombie themselves, there's a small time
+	     window, so such a check would be racy.  Temporarily
+	     pausing all threads and checking to see if all threads
+	     exit or not before re-resuming them would work in the
+	     case that all threads are running right now, but it
+	     wouldn't work if some thread is currently already
+	     ptrace-stopped, e.g., due to scheduler-locking.
+
+	     So what we do is delete the leader anyhow, and then later
+	     on, when we see its exit status, we add it back.
+	     We also make sure that we only report a whole-process
+	     exit when we see the leader exiting, as opposed to when
+	     the last LWP in the LWP list exits, which can be a
+	     non-leader if we deleted the leader here.  */
 	  linux_nat_debug_printf ("Thread group leader %d zombie "
-				  "(it exited, or another thread execd).",
+				  "(it exited, or another thread execd), "
+				  "deleting it.",
 				  inf->pid);
-
-	  /* A leader zombie can mean one of two things:
-
-	     - It exited, and there's an exit status pending
-	     available, or only the leader exited (not the whole
-	     program).  In the latter case, we can't waitpid the
-	     leader's exit status until all other threads are gone.
-
-	     - There are 3 or more threads in the group, and a thread
-	     other than the leader exec'd.  See comments on exec
-	     events at the top of the file.  We could try
-	     distinguishing the exit and exec cases, by waiting once
-	     more, and seeing if something comes out, but it doesn't
-	     sound useful.  The previous leader _does_ go away, and
-	     we'll re-add the new one once we see the exec event
-	     (which is just the same as what would happen if the
-	     previous leader did exit voluntarily before some other
-	     thread execs).  */
-
-	  linux_nat_debug_printf ("Thread group leader %d vanished.", inf->pid);
 	  exit_lwp (leader_lp);
 	}
     }
@@ -3057,7 +3110,7 @@ filter_exit_event (struct lwp_info *event_child,
 {
   ptid_t ptid = event_child->ptid;
 
-  if (num_lwps (ptid.pid ()) > 1)
+  if (!is_leader (event_child))
     {
       if (report_thread_events)
 	ourstatus->set_thread_exited (0);

