public inbox for rda@sourceware.org
* NPTL work committed to jimb-rda-nptl-branch
@ 2004-11-23  6:16 Jim Blandy
  2004-12-02 20:15 ` Daniel Jacobowitz
  0 siblings, 1 reply; 5+ messages in thread
From: Jim Blandy @ 2004-11-23  6:16 UTC (permalink / raw)
  To: rda


With the following commit, RDA (on the branch) now supports NPTL on
the i386.  It seems to handle linux-bp and manythreads pretty well
(although the latter test is broken on remote targets).

rda/unix/ChangeLog:
2004-11-23  Jim Blandy  <jimb@redhat.com>

	Separate management of kernel-level LWPs from that of libpthread /
	libthread_db-level threads.
	* lwp-pool.c, lwp-pool.h: New files.
	* thread-db.c: #include "lwp-ctrl.h" and "lwp-pool.h".
	(struct gdbserv_thread): Delete members 'attached', 'stopped',
	'waited', and 'stepping'.  This structure is now just a
	'td_thrinfo_t' and a list link.  Describe some quirks in the
	meanings of certain 'ti' fields.
	(thread_list_lookup_by_lid): Move later in file, so we can use
	information directly from our proc handle.  Be skeptical of ZOMBIE
	or UNKNOWN threads whose LWP ID is equal to the PID in the proc
	handle.
	(thread_debug_name): Move later in file, so we can use
	thread_db_state_str.
	(attach_thread): Use lwp pool functions to attach.  Attach to
	zombies.  When using signal-based communication, send the thread
	the restart signal immediately.
	(find_new_threads_callback): Go ahead and attach to all threads.
	The LWP pool functions tolerate attaching to a given LWP more than
	once.
	(update_thread_list): Take the process as an argument.  If the
	focus thread has disappeared, set process->focus_thread to NULL.
	(thread_db_thread_next): Pass the process to update_thread_list.
	(stop_thread, stop_all_threads, add_pending_event,
	delete_pending_event, select_pending_event, send_pending_signals,
	wait_all_threads, continue_all_threads): Deleted.
	(handle_thread_db_event): Renamed from handle_thread_db_events.
	Take the process structure as an argument, and check only for a
	thread-db event notification from process->event_thread.  Use LWP
	pool functions.
	(continue_thread, singlestep_thread): Use LWP pool functions.
	(thread_db_continue_program, thread_db_singlestep_program,
	thread_db_continue_thread, thread_db_singlestep_thread): Use LWP
	pool functions, and update process->focus_thread appropriately.
	(thread_db_check_child_state): Use the LWP pool functions.  Rather
	than stopping all LWP's, choosing the most interesting events, and
	then arranging to re-create all the other wait statuses we got,
	just pick the first event we get from lwp_pool_waitpid (either on
	the focus thread, if there is one, or on any thread) and report
	that.  Use the new handle_thread_db_event function.
	(struct event_list, pending_events, pending_events_listsize,
	pending_events_top): Deleted; replaced by LWP pool code.
	(thread_db_attach): Tell the LWP pool about the PID we're
	attaching to.  Clear the focus thread.
	* server.h (struct process): New member: 'focus_thread'.
	* gdbserv-thread-db.h (continue_lwp, singlestep_lwp, attach_lwp,
	stop_lwp): Move declarations from here...
	* lwp-ctrl.h: ... to here.  New file.
	(kill_lwp): Renamed from stop_lwp; allow caller to specify any
	signal.
	* ptrace-target.c: #include "lwp-ctrl.h".
	(continue_lwp, singlestep_lwp, attach_lwp, stop_lwp): Move
	function comments to lwp-ctrl.h, and expand.
	* configure.in: Whenever we select 'thread-db.o', select
	'lwp-pool.o' as well.
	* configure: Regenerated.

	* thread-db.c (thread_db_check_child_state): Remove extraneous
	call to handle_waitstatus.  Remove extra check for exited main
	thread.
	
	* thread-db.c (thread_db_thread_info): List the type and state
	before the PID, and mention whether the LWP's PID is equal to that
	of the main thread, since ZOMBIE and UNKNOWN threads whose LWP's
	PID is equal are probably actually exited threads.
 	
	* thread-db.c (add_thread_to_list): Zero out entire structure.

	* thread-db.c (thread_db_state_str, thread_db_type_str): Remove
	spaces from names; we don't always want them, and the caller can
	use printf formatting directives to arrange things as they please.

	* ptrace-target.c (continue_lwp, singlestep_lwp, attach_lwp,
	stop_lwp): Change arguments from 'lwpid_t' to 'pid_t'.  lwpid_t is
	strictly a thread-db type; these are functions that use system
	calls, which all expect pid_t.  Rename arguments from 'lwpid' to
	'pid'.

	* ptrace-target.c: #define _GNU_SOURCE to get declaration for
	strsignal.
	(kill_lwp): Enhance error reporting.

Index: rda/unix/lwp-pool.h
===================================================================
RCS file: rda/unix/lwp-pool.h
diff -N rda/unix/lwp-pool.h
*** rda/unix/lwp-pool.h	1 Jan 1970 00:00:00 -0000
--- rda/unix/lwp-pool.h	23 Nov 2004 05:52:21 -0000
***************
*** 0 ****
--- 1,173 ----
+ /* lwp-pool.h --- interface to a stoppable, waitable LWP pool.
+ 
+    Copyright 2004 Red Hat, Inc.
+ 
+    This file is part of RDA, the Red Hat Debug Agent (and library).
+ 
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+ 
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+ 
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place - Suite 330,
+    Boston, MA 02111-1307, USA.
+    
+    Alternative licenses for RDA may be arranged by contacting Red Hat,
+    Inc.  */
+ 
+ #ifndef RDA_UNIX_LWP_POOL_H
+ #define RDA_UNIX_LWP_POOL_H
+ 
+ #include <sys/types.h>
+ #include <sys/wait.h>
+ 
+ struct gdbserv;
+ 
+ /* These functions manage a set of LWPs that you can wait for à la
+    waitpid, but that you can also stop and continue as a group without
+    disturbing individual threads' wait statuses.
+ 
+    Here we use "LWP" to mean the kernel-level thingy that is running
+    code, and "thread" to mean the POSIX threads / libthread_db-level
+    object.  The Linux kernel calls an 'LWP' a 'thread', which would be
+    confusing.
+ 
+    The LWPs must be either children of the calling process, or
+    processes we have attached to --- something we will hear about when
+    we call 'waitpid'.
+ 
+    We have separate tables for Unix LWP's and libthread_db threads,
+    because even though standard Linux distributions have never used
+    M:N threads, there still isn't a simple 1:1 relationship between
+    them.  Before the thread library has been loaded and initialized
+    itself, you have an LWP with no thread.  After a thread has exited,
+    but before any other thread has joined with it, you can have a
+    thread with no LWP.  Add that to the way libthread_db reports the
+    id of an exited LWP as being equal to ps_getpid (proc_handle), and
+    it becomes worthwhile having a clear separation between the two.  */
+ 
+ 
+ /* Add PID to the LWP pool, assuming that PID is stopped, and the
+    uninteresting wait status has been received (and thrown away).  Use
+    this for a child that has been forked, and where we've waited for
+    the exec SIGSTOP.  */
+ void lwp_pool_new_stopped (pid_t pid);
+ 
+ 
+ /* Attach to PID, add it to the LWP pool, and wait for it to stop.  If
+    PID is already in the pool, do nothing and return 0.  If PID was
+    not already in the pool and we successfully attached to it, return
+    1.  On failure, return -1 and set errno.
+ 
+    If there is an interesting wait status available for PID,
+    lwp_pool_waitpid will report it, but the wait status caused by the
+    attach is handled internally, and will not be reported via
+    lwp_pool_waitpid.  */
+ int lwp_pool_attach (pid_t pid);
+ 
+ 
+ /* Do we need a function for detaching from each LWP in the pool
+    individually?  */
+ 
+ 
+ /* Behave like 'waitpid (PID, STAT_LOC, OPTIONS)', but do not report
+    boring wait statuses --- those due to calls to lwp_pool_attach,
+    lwp_pool_stop_all, etc.
+ 
+    PID must be either -1 (wait for any process) or a positive
+    integer (wait for the process with that specific pid).
+ 
+    The only bit that may be set in OPTIONS is WNOHANG.  We need to
+    monitor the status of all LWP's, so we add __WALL as appropriate.  */
+ pid_t lwp_pool_waitpid (pid_t pid, int *stat_loc, int options);
+ 
+ 
+ /* Stop all running LWP's in the pool.  This function does not return
+    until all LWP's are known to be stopped.
+ 
+    The wait status caused by the stop is handled internally, and will
+    not be reported by lwp_pool_waitpid.  */
+ void lwp_pool_stop_all (void);
+ 
+ 
+ /* Continue all stopped, uninteresting LWP's in the pool.
+    If some of the LWP's have been resumed with lwp_pool_singlestep or
+    lwp_pool_continue, those will be left to continue to run.  */
+ void lwp_pool_continue_all (void);
+ 
+ 
+ /* Continue LWP.  If SIGNAL is non-zero, continue it with signal
+    SIGNAL.  Return zero on success, -1 on failure.  */
+ int lwp_pool_continue_lwp (pid_t pid, int signal);
+ 
+ 
+ /* Continue LWP in SERV for one instruction, delivering SIGNAL if it
+    is non-zero, and stop with SIGSTOP if/when that instruction has
+    been completed.
+ 
+    The SERV argument is there because singlestep_lwp requires it.
+    Inconsistency, bleah.  */
+ int lwp_pool_singlestep_lwp (struct gdbserv *serv, pid_t lwp, int signal);
+ 
+ 
+ /* Under NPTL, LWP's simply disappear, without becoming a zombie or
+    producing any wait status.  At the kernel level, we have no way of
+    knowing that the LWP's PID is now free and may be reused ---
+    perhaps by an entirely different program!  So we need to use the
+    death events from libthread_db to help us keep our LWP table clean.
+ 
+    There are two steps:
+ 
+    - first, the thread sends RDA a libthread_db TD_DEATH event,
+      indicating that it is about to exit.
+ 
+    - then, the thread takes some pre-negotiated action (hitting a
+      breakpoint; making a system call) to notify libthread_db that
+      there are events queued it should attend to.
+ 
+    What's tricky here is that the queueing of the event and the
+    notification are not synchronized.  So RDA could easily receive
+    TD_DEATH events for several threads when the first of those threads
+    performs its notification.  We need to continue to manage the LWPs
+    of the remaining threads whose death is foretold (are there any
+    named Santiago?) until they have completed their notifications.
+ 
+    (And since RDA consumes all the events each time a notification is
+    received, we should be prepared to receive notifications even when
+    the queue is empty.  But that's not our problem here.)
+ 
+    So the LWP pool code has the following two entry points:
+ 
+    - The first indicates that a TD_DEATH event has been received for a
+      given thread, and that once it has completed its notification, we
+      should expect to hear nothing from it again.
+ 
+    - The second indicates that some LWP, whether marked for death or
+      not, has completed its notification.
+ 
+    So when a thread completes its notification, *and* that thread has
+    been marked for death, we should drop it from the LWP pool.  */
+ 
+ 
+ /* Indicate that LWP's death has been foretold by a TD_DEATH message
+    from libthread_db.  Once we are told that it has completed its
+    event notification by a call to lwp_pool_thread_db_death_notified, we
+    will forget about LWP entirely.  */
+ void lwp_pool_thread_db_death_event (pid_t lwp);
+ 
+ 
+ /* Indicate that LWP has completed its event notification.  LWP must
+    be currently stopped.  If LWP's death has been foretold by a call to
+    lwp_pool_thread_db_death_event, when LWP is continued, we will remove it
+    from the LWP pool and forget about it entirely.  */
+ void lwp_pool_thread_db_death_notified (pid_t lwp);
+ 
+ 
+ #endif /* RDA_UNIX_LWP_POOL_H */
Index: rda/unix/lwp-pool.c
===================================================================
RCS file: rda/unix/lwp-pool.c
diff -N rda/unix/lwp-pool.c
*** rda/unix/lwp-pool.c	1 Jan 1970 00:00:00 -0000
--- rda/unix/lwp-pool.c	23 Nov 2004 05:52:21 -0000
***************
*** 0 ****
--- 1,1468 ----
+ /* lwp-pool.c --- implementation of a stoppable, waitable LWP pool.
+ 
+    Copyright 2004 Red Hat, Inc.
+ 
+    This file is part of RDA, the Red Hat Debug Agent (and library).
+ 
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+ 
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+ 
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place - Suite 330,
+    Boston, MA 02111-1307, USA.
+    
+    Alternative licenses for RDA may be arranged by contacting Red Hat,
+    Inc.  */
+ 
+ #include "config.h"
+ 
+ #define _GNU_SOURCE /* for strerror */
+ 
+ #include <assert.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <stdio.h>
+ #include <errno.h>
+ #include <sys/types.h>
+ #include <sys/wait.h>
+ 
+ #include "lwp-pool.h"
+ #include "lwp-ctrl.h"
+ 
+ static int debug_lwp_pool = 1;
+ 
+ \f
+ /* THE LIFETIME OF A TRACED LWP
+ 
+    POSIX uses these terms in talking about signals:
+ 
+    - To "generate" a signal is to call kill or raise, divide by zero,
+      etc.
+ 
+    - To "deliver" a signal is to do whatever that signal's designated
+      action is: ignore it, enter a signal handler, terminate the
+      process, or stop the process.
+ 
+    - To "accept" a signal is to have 'sigwait' or a similar function
+      select and return the signal.
+ 
+    - A signal is "pending" between the time it is generated and the
+      time it is delivered.
+ 
+    So, here is the life cycle of a traced LWP:
+ 
+    - It is created by fork or vfork and does a PTRACE_TRACEME.  The
+      PTRACE_TRACEME makes it a traced, running LWP.  When a traced LWP
+      does an exec, it gets a SIGTRAP before executing the first
+      instruction in the new process image, so the LWP will then stop.
+ 
+      Or, we attach to it with a PTRACE_ATTACH.  This sends a SIGSTOP
+      to the LWP, so it will stop.
+ 
+    - While a traced LWP is stopped, we can read and write its
+      registers and memory.  We can also send it signals; they become
+      pending on the LWP, and are not delivered or accepted until it is
+      continued.
+ 
+    - A stopped LWP can be set running again in one of two ways:
+ 
+      + by doing a PTRACE_CONT, PTRACE_SINGLESTEP, or PTRACE_SYSCALL; or
+ 
+      + by sending it a SIGCONT.
+ 
+      The ptrace requests all let you specify a signal to be delivered to
+      the process.
+ 
+      Sending a SIGCONT clears any pending SIGSTOPs; PTRACE_CONT and
+      PTRACE_SINGLESTEP don't have that side effect.
+ 
+      (Sending an LWP a SIGKILL via the 'kill' or 'tkill' system calls
+      acts like sending it a SIGKILL followed by a SIGCONT.)
+ 
+    - A running LWP may exit or be terminated by a signal at any time,
+      so accessing its memory or registers or sending it a signal is
+      always a race.
+ 
+    - waitpid will eventually return a status S for a continued LWP:
+ 
+      + If WIFEXITED (S) or WIFSIGNALED (S), the LWP no longer exists.
+      
+      + If WIFSTOPPED (S), the LWP is stopped again, because some
+        signal WSTOPSIG (S) was about to be delivered to it.  Here we
+        go back to the second step.
+ 
+        Note that the signal WSTOPSIG (S) has not yet been delivered to
+        the process, and is no longer pending on the process.  Only
+        signals passed to the ptrace requests get delivered.  In
+        effect, the debugger gets to intercept signals before they are
+        delivered, and decide whether to pass them through or not.
+        (The exception is SIGKILL: that always produces a WIFSIGNALED
+        wait status, and terminates the process.)
+ 
+    So, to put all that together:
+ 
+    - A traced LWP goes back and forth from running to stopped, until
+      eventually it goes from running to exited or killed.
+ 
+    - Running->stopped transitions are always signal deliveries, yielding
+      WIFSTOPPED wait statuses.
+ 
+    - Stopping->running transitions are generally due to ptrace
+      requests by the debugger.  (The debugger could send signals, but
+      that's messy.)
+ 
+    - Running->exited transitions are due to, duh, the LWP exiting.
+ 
+    - Running->killed transitions are due to a signal being delivered
+      to the LWP that is neither ignored nor caught.
+ 
+ 
+    Under NPTL, this life cycle is a bit different: LWPs simply exit,
+    without creating a zombie; they produce no wait status.  The NPTL
+    libthread_db generates a TD_DEATH event for them, but at the kernel
+    level the only indication that they're gone is that the tkill
+    system call fails with ESRCH ("No such process").
+ 
+    Under LinuxThreads, LWPs remain zombie processes until they're
+    waited for.  Attempts to send them signals while zombies have no
+    effect, but return no error.
+ 
+ 
+    STOPPING A PROCESS
+ 
+    The major challenge here is implementing the lwp_pool_stop_all
+    function.  The only way to stop a running LWP is to send it a
+    SIGSTOP, and then wait for a status acknowledging the stop.  But as
+    explained above, a running LWP could stop at any time of its own
+    accord, so sending it a SIGSTOP is always a race.  By the time you
+    call waitpid, you don't know whether you'll get a status for the
+    SIGSTOP you just sent, or for something else: some other signal, an
+    exit, or a termination by signal.
+ 
+    If the LWP turns out to have exited or died, then that's pretty
+    easy to handle.  Your attempt to send a SIGSTOP will get an error,
+    and then you'll get a wait status for the termination.  A
+    termination status is always the last status you'll get from wait
+    for that LWP, so there'll be no further record of your SIGSTOP.
+ 
+    If the LWP was about to have some other signal delivered to it,
+    then the next wait will return a WIFSTOPPED status for that signal;
+    we'll have to continue the LWP and wait again until we get the
+    status for our SIGSTOP.  The kernel forgets about any signals the
+    LWP has received once it has reported them to us, so it's up to us
+    to keep track of them and report them via lwp_pool_waitpid.  */
+ 
+ 
+ \f
+ /* The LWP structure.  */
+ 
+ /* The states an LWP we're managing might be in.
+ 
+    For the purposes of these states, we classify wait statuses as
+    follows:
+ 
+    - An "interesting" wait status is one that isn't a result of us
+      attaching to the LWP or sending it a SIGSTOP for
+      lwp_pool_stop_all.  It indicates something that happened to the
+      LWP other than as a result of this code's fiddling with it.  We
+      report all interesting wait statuses via lwp_pool_waitpid.
+ 
+    - A "boring" wait status is one that results from our attaching to
+      it or sending it a SIGSTOP for lwp_pool_stop_all.  We do not
+      report these via lwp_pool_waitpid.
+ 
+    Most of these states are combinations of various semi-independent
+    factors, which we'll name and define here:
+ 
+    - RUNNING / STOPPED / DEAD: These are the kernel states of the LWP:
+      it's either running freely and could stop at any moment, is
+      stopped but can be continued, or has died.
+ 
+    - INTERESTING: this LWP has stopped or died with a wait status that
+      has not yet been reported via lwp_pool_waitpid.  It is on the
+      interesting LWP queue.
+ 
+      This never applies to RUNNING LWPs: we never continue an
+      INTERESTING LWP until we've reported its status.
+ 
+      It always applies to DEAD LWPs.
+ 
+    - STOP PENDING: we've sent this LWP a SIGSTOP, or attached to it,
+      but we haven't yet received the boring WIFSTOPPED SIGSTOP status.
+ 
+      This never applies to DEAD LWPs; the wait status that announces a
+      LWP's death is always the last for that LWP.
+ 
+      There's nothing wrong with having STOPPED, un-INTERESTING, and
+      STOP PENDING LWP's, but it turns out that we can always just
+      continue the thread and wait immediately for it, making such a
+      combination unnecessary.
+ 
+      We could do something similar and eliminate the RUNNING, STOP
+      PENDING state, but that state turns out to be handy for error
+      checking.
+ 
+    We could certainly represent these with independent bits or
+    bitfields, but not all combinations are possible.  So instead, we
+    assign each possible combination a distinct enum value, to make it
+    easier to enumerate all the valid possibilities and be sure we've
+    handled them.  */
+ 
+ enum lwp_state {
+ 
+   /* An uninitialized LWP entry.  Only the lookup function itself,
+      hash_find, creates entries in this state, and any function
+      that calls that should put the entry in a meaningful state before
+      returning.  */
+   lwp_state_uninitialized,
+ 
+   /* RUNNING.  This LWP is running --- last we knew.  It may have
+      exited or been terminated by a signal, or it may have had a
+      signal about to be delivered to it.  We won't know until we wait
+      for it.  */
+   lwp_state_running,
+ 
+   /* STOPPED.  This LWP has stopped, and has no interesting status to
+      report.  */
+   lwp_state_stopped,
+ 
+   /* STOPPED, INTERESTING.  This LWP has stopped with an interesting
+      wait status, which we haven't yet reported to the user.  It is on
+      the interesting LWP queue.  */
+   lwp_state_stopped_interesting,
+ 
+   /* DEAD, INTERESTING.  This LWP exited, or was killed by a signal.
+      This LWP is on the interesting LWP queue.  Once we've reported it
+      to the user, we'll delete it altogether.  */
+   lwp_state_dead_interesting,
+ 
+   /* RUNNING, STOP PENDING.  This LWP was running, and will eventually
+      stop with a boring WIFSTOPPED SIGSTOP status, but may report an
+      interesting status first.
+ 
+      It's always safe to wait for a thread in this state, so we do
+      that as soon as possible; there shouldn't be any threads in this
+      state between calls to public lwp_pool functions.  This is an
+      internal-use state.  */
+   lwp_state_running_stop_pending,
+ 
+   /* STOPPED, STOP PENDING, and INTERESTING.  This LWP has stopped with
+      an interesting wait status.  We're also expecting a boring wait
+      status from it.  */
+   lwp_state_stopped_stop_pending_interesting,
+ 
+ };
+ 
+ 
+ /* The thread_db death state.  See the descriptions of the
+    lwp_pool_thread_db_* functions in lwp-pool.h.  */
+ enum death_state {
+ 
+   /* We've received no indication that this thread will exit.  */
+   death_state_running,
+ 
+   /* We've received a TD_DEATH event for this thread, but it hasn't
+      completed its event notification yet.  */
+   death_state_event_received,
+ 
+   /* We've received a TD_DEATH event for this thread, and it has
+      completed its event notification; when we continue it next, we
+      will delete it from the hash table and forget about it
+      entirely.  */
+   death_state_delete_when_continued
+ };
+ 
+ 
+ struct lwp
+ {
+   /* This lwp's PID.  */
+   pid_t pid;
+ 
+   /* The state this LWP is in.  */
+   enum lwp_state state;
+ 
+   /* Its thread_db death notification state.  */
+   enum death_state death_state;
+ 
+   /* If STATE is one of the lwp_state_*_interesting states, then this
+      LWP is on the interesting LWP queue, headed by interesting_queue.
+ 
+      If STATE is lwp_state_running_stop_pending, then this LWP is on
+      the stopping LWP queue, stopping_queue.  (Note that
+      stopping_queue is local to lwp_pool_stop_all; no thread should be
+      in that state by the time that function returns.)  */
+   struct lwp *prev, *next;
+ 
+   /* If STATE is one of the lwp_state_*_interesting states, then
+      STATUS is the interesting wait status.  */
+   int status;
+ };
+  
+   
+ \f
+ /* The LWP hash table.  */
+ 
+ /* A hash table of all the live LWP's we know about.
+    hash_population is the number of occupied entries in the table.
+ 
+    hash_size is the total length of the table; it is always a power of
+    two.  We resize the table to ensure that it is between 12.5% and
+    50% occupied.  (Since the table's size is a power of two, resizing
+    the table will always halve or double the populated ratio.  So
+    there should be comfortably more than a factor of two between the
+    maximum and minimum populations, for hysteresis.)
+ 
+    The first slot we try is hash[PID % hash_size].  After C
+    collisions, we try hash[(PID + C * STRIDE) % hash_size], where
+    STRIDE is hash_size / 4 + 1.  The kernel assigns pids sequentially,
+    so a STRIDE of 1, as many hash tables use, would make further
+    collisions very likely.  But since hash_size is always a power of
+    two, and hash_size / 4 + 1 is always odd, they are always
+    relatively prime, so stepping by that many elements each time will
+    eventually visit every table element.  A constant odd stride would
+    be fine, but it's nice to have it scale with the overall population
+    of the table.
+ 
+    The table is an array of pointers to lwp's, rather than a direct
+    array of lwp structures, so that pointers to lwp's don't become
+    invalid when we rehash or delete entries.  */
+ static size_t hash_size, hash_population;
+ static struct lwp **hash;
+ 
+ /* The minimum size for the hash table.  Small for testing.  */
+ enum { minimum_hash_size = 8 };
+ 
+ 
+ /* Return the hash slot for pid PID.  */
+ static int
+ hash_slot (pid_t pid, size_t size)
+ {
+   return pid & (size - 1);
+ }
+ 
+ 
+ /* If there was a collision in SLOT, return the next slot.  */
+ static int
+ hash_next_slot (int slot, size_t size)
+ {
+   int stride = size / 4 + 1;
+ 
+   return (slot + stride) & (size - 1);
+ }
+ 
+ 
+ /* Return the earliest empty hash slot for PID.  */
+ static int
+ hash_empty_slot (pid_t pid)
+ {
+   int slot = hash_slot (pid, hash_size);
+ 
+   /* Since hash_next_slot will eventually visit every slot, and we
+      know the table isn't full, this loop will terminate.  */
+   while (hash[slot])
+     slot = hash_next_slot (slot, hash_size);
+ 
+   return slot;
+ }
+ 
+ 
+ /* Return a new, empty hash table containing ELEMENTS elements.  This has
+    no effect on the LWP pool's global variables.  */
+ static struct lwp **
+ make_hash_table (size_t elements)
+ {
+   struct lwp **hash;
+   size_t size = elements * sizeof (*hash);
+ 
+   hash = malloc (size);
+   memset (hash, 0, size);
+ 
+   return hash;
+ }
+ 
+ 
+ /* Resize hash as needed to ensure that the table's population is
+    between 12.5% and 50% of its size.  */
+ static void
+ resize_hash (void)
+ {
+   struct lwp **new_hash;
+   size_t new_hash_size;
+   int new_hash_population; /* just for sanity checking */
+   int i;
+ 
+   /* Pick a new size.  */
+   new_hash_size = hash_size;
+   while (new_hash_size < hash_population * 2)
+     new_hash_size *= 2;
+   while (new_hash_size > minimum_hash_size
+ 	 && new_hash_size > hash_population * 8)
+     new_hash_size /= 2;
+ 
+   /* We may have re-chosen the minimum table size.  */
+   if (new_hash_size == hash_size)
+     return;
+ 
+   new_hash = make_hash_table (new_hash_size);
+   new_hash_population = 0;
+ 
+   /* Re-insert all the old lwp's in the new table.  */
+   for (i = 0; i < hash_size; i++)
+     if (hash[i])
+       {
+ 	struct lwp *l = hash[i];
+ 	int new_slot = hash_slot (l->pid, new_hash_size);
+ 
+ 	while (new_hash[new_slot])
+ 	  new_slot = hash_next_slot (new_slot, new_hash_size);
+ 
+ 	new_hash[new_slot] = l;
+ 	new_hash_population++;
+       }
+ 
+   if (new_hash_population != hash_population)
+     fprintf (stderr, "ERROR: rehashing changed population from %d to %d\n",
+ 	     hash_population, new_hash_population);
+ 
+   /* Free the old table, and drop in the new one.  */
+   free (hash);
+   hash = new_hash;
+   hash_size = new_hash_size;
+ }
+ 
+ 
+ /* Find an existing hash table entry for LWP.  If there is none,
+    create one in state lwp_state_uninitialized.  */
+ static struct lwp *
+ hash_find (pid_t lwp)
+ {
+   int slot;
+   struct lwp *l;
+ 
+   /* Do we need to initialize the hash table?  */
+   if (! hash)
+     {
+       hash_size = minimum_hash_size;
+       hash = make_hash_table (hash_size);
+       hash_population = 0;
+     }
+ 
+   for (slot = hash_slot (lwp, hash_size);
+        hash[slot];
+        slot = hash_next_slot (slot, hash_size))
+     if (hash[slot]->pid == lwp)
+       return hash[slot];
+ 
+   /* There is no entry for this lwp.  Create one.  */
+   l = malloc (sizeof (*l));
+   l->pid = lwp;
+   l->state = lwp_state_uninitialized;
+   l->death_state = death_state_running;
+   l->next = l->prev = NULL;
+   l->status = 42;
+ 
+   hash[slot] = l;
+   hash_population++;
+ 
+   /* Do we need to resize?  */
+   if (hash_size < hash_population * 2)
+     resize_hash ();
+ 
+   return l;
+ }
+ 
+ 
+ /* Remove the LWP L from the pool.  This does not free L itself.  */
+ static void
+ hash_delete (struct lwp *l)
+ {
+   int slot;
+ 
+   for (slot = hash_slot (l->pid, hash_size);
+        hash[slot];
+        slot = hash_next_slot (slot, hash_size))
+     if (hash[slot]->pid == l->pid)
+       break;
+ 
+   /* We shouldn't ever be asked to delete a 'struct lwp' that isn't in
+      the table.  */
+   assert (hash[slot]);
+ 
+   /* There should be only one 'struct lwp' with a given PID.  */
+   assert (hash[slot] == l);
+ 
+   /* Deleting from this kind of hash table is interesting, because of
+      the way we handle collisions.
+ 
+      For the sake of discussion, pretend that STRIDE is 1 (the
+      reasoning is basically the same either way, but this has less
+      hair).
+ 
+      When we search for an LWP that hashes to slot S, because there
+      may be collisions, the set of slots we'll actually search is the
+      contiguous run of non-empty table entries that starts at S,
+      heading towards higher indices (and possibly wrapping around at
+      the end of the table).  When we find an empty table entry, we
+      give up the search.
+ 
+      When we delete an LWP, if we simply set its slot to zero, that
+      could cause us to cut off later searches too early.  For example,
+      if three LWP's all hash to slot S, and have been placed in slots
+      S, S+1, and S+2, and we set slot S+1 to zero, then a search for
+      the LWP at S+2 will start at S, and then stop at S+1 without ever
+      seeing the right entry at S+2.
+ 
+      Some implementations place a special "deleted" marker in the slot
+      to let searches continue.  But then it's hard to ensure that the
+      table doesn't get choked with deleted markers; and should deleted
+      markers count towards the population for resizing purposes?  It's
+      a mess.
+ 
+      So after clearing a slot, we walk the remainder of the contiguous
+      run of entries and re-hash them all.  If the hash function is
+      doing a good job distributing entries across the table,
+      contiguous runs should be short.  And it had better be good,
+      because this is potentially quadratic.
+ 
+      Of course, if we're going to resize the table, that removes all
+      deleted elements, so we needn't bother with any of this.  */
+ 
+   hash[slot] = NULL;
+   hash_population--;
+ 
+   if (hash_size > minimum_hash_size
+       && hash_size > hash_population * 8)
+     resize_hash ();
+   else
+     for (slot = hash_next_slot (slot, hash_size);
+ 	 hash[slot];
+ 	 slot = hash_next_slot (slot, hash_size))
+       {
+ 	struct lwp *refugee = hash[slot];
+ 
+ 	hash[slot] = NULL;
+ 	hash[hash_empty_slot (refugee->pid)] = refugee;
+       }
+ }
+ 
+ 
+ \f
+ /* Queue functions.  */ 
+ 
+ /* Insert L at the end of the queue headed by QUEUE.  */ 
+ static void
+ queue_enqueue (struct lwp *queue, struct lwp *l)
+ {
+   assert (! l->next && ! l->prev);
+ 
+   l->next = queue;
+   l->prev = queue->prev;
+   l->prev->next = l;
+   l->next->prev = l;
+ }
+ 
+ 
+ /* If L is part of some queue, remove it.  */
+ static void
+ queue_delete (struct lwp *l)
+ {
+   assert (l->next && l->prev);
+ 
+   l->next->prev = l->prev;
+   l->prev->next = l->next;
+   l->next = l->prev = NULL;
+ }
+ 
+ 
+ /* Return non-zero if there is anything in QUEUE, zero otherwise.  */
+ static int
+ queue_non_empty (struct lwp *queue)
+ {
+   return queue->next != queue;
+ }
+ 
+ 
+ /* Return the first LWP from QUEUE, but don't remove it.  If QUEUE is
+    empty, return NULL.  */
+ static struct lwp *
+ queue_first (struct lwp *queue)
+ {
+   struct lwp *l = queue->next;
+ 
+   if (l != queue)
+     return l;
+   else
+     return NULL;
+ }
+ 
+ 
+ \f
+ /* Hashing LWP's, but with error checking and cleanup.  */
+ 
+ 
+ /* Add an entry for LWP to the pool and return it.  There should be no
+    existing entry for LWP; if there is, complain and reset it.  The
+    returned LWP's state is always lwp_state_uninitialized; the caller
+    must put it in a meaningful state before returning.  */
+ static struct lwp *
+ hash_find_new (pid_t lwp)
+ {
+   struct lwp *l = hash_find (lwp);
+ 
+   if (l->state != lwp_state_uninitialized)
+     {
+       fprintf (stderr, "ERROR: new LWP %d already in table\n", (int) lwp);
+ 
+       /* Remove it from any queue it might be in.  */
+       if (l->next)
+ 	queue_delete (l);
+     }
+ 
+   l->state = lwp_state_uninitialized;
+ 
+   return l;
+ }
+ 
+ 
+ /* Find an entry for an existing LWP, and return it.  If we have no
+    existing entry for LWP, print an error message, but return the new,
+    uninitialized entry anyway.  */
+ static struct lwp *
+ hash_find_known (pid_t lwp)
+ {
+   struct lwp *l = hash_find (lwp);
+ 
+   if (l->state == lwp_state_uninitialized)
+     fprintf (stderr, "ERROR: unexpected lwp: %d\n", (int) lwp);
+ 
+   return l;
+ }
+ 
+ 
+ \f
+ /* Waiting.  */
+ 
+ 
+ /* The head of the queue of LWP's with interesting wait statuses.
+    Only the prev and next members are meaningful.
+ 
+    Every LWP in one of the lwp_state_*_interesting states should be on
+    this queue.  If an LWP's state is lwp_state_dead_interesting, the
+    LWP is not in the hash table any more.  */
+ static struct lwp interesting_queue
+ = { -1, 0, 0, &interesting_queue, &interesting_queue, 42 };
+ 
+ 
+ static const char *
+ wait_status_str (int status)
+ {
+   static char buf[100];
+ 
+   if (WIFSTOPPED (status))
+     sprintf (buf, "WIFSTOPPED (s) && WSTOPSIG (s) == %d (%s)",
+ 	     WSTOPSIG (status), strsignal (WSTOPSIG (status)));
+   else if (WIFEXITED (status))
+     sprintf (buf, "WIFEXITED (s) && WEXITSTATUS (s) == %d",
+ 	     WEXITSTATUS (status));
+   else if (WIFSIGNALED (status))
+     sprintf (buf, "WIFSIGNALED (s) && WTERMSIG (s) == %d (%s)%s",
+ 	     WTERMSIG (status),
+ 	     strsignal (WTERMSIG (status)),
+ 	     WCOREDUMP (status) ? " && WCOREDUMP(s)" : "");
+   else
+     sprintf (buf, "%d (unrecognized status)", status);
+ 
+   return buf;
+ }
+ 
+ 
+ static const char *
+ wait_flags_str (int flags)
+ {
+   static const struct {
+     int flag;
+     const char *name;
+   } flag_table[] = {
+     { WNOHANG, "WNOHANG" },
+     { WUNTRACED, "WUNTRACED" },
+ #ifdef __WCLONE
+     { __WCLONE, "__WCLONE" },
+ #endif
+ #ifdef __WALL
+     { __WALL, "__WALL" },
+ #endif
+ #ifdef __WNOTHREAD
+     { __WNOTHREAD, "__WNOTHREAD" },
+ #endif
+     { 0, 0 }
+   };
+   static char buf[100];
+   int i;
+ 
+   buf[0] = '\0';
+   for (i = 0; flag_table[i].flag; i++)
+     if (flags & flag_table[i].flag)
+       {
+ 	strcat (buf, flag_table[i].name);
+ 	flags &= ~flag_table[i].flag;
+ 	if (flags)
+ 	  strcat (buf, " | ");
+       }
+ 
+   if (flags)
+     sprintf (buf + strlen (buf), "0x%x", (unsigned) flags);
+ 
+   return buf;
+ }
+ 
+ 
+ static const char *
+ lwp_state_str (enum lwp_state state)
+ {
+   switch (state)
+     {
+     case lwp_state_uninitialized:
+       return "uninitialized";
+     case lwp_state_running:
+       return "running";
+     case lwp_state_stopped:
+       return "stopped";
+     case lwp_state_stopped_interesting:
+       return "stopped_interesting";
+     case lwp_state_dead_interesting:
+       return "dead_interesting";
+     case lwp_state_running_stop_pending:
+       return "running_stop_pending";
+     case lwp_state_stopped_stop_pending_interesting:
+       return "stopped_stop_pending_interesting";
+     default:
+       {
+ 	static char buf[100];
+ 	sprintf (buf, "%d (unrecognized lwp_state)", state);
+ 	return buf;
+       }
+     }
+ }
+ 
+ 
+ static void
+ debug_report_state_change (pid_t lwp,
+ 			   enum lwp_state old,
+ 			   enum lwp_state new)
+ {
+   if (debug_lwp_pool && old != new)
+     fprintf (stderr,
+ 	     "%32s -- %5d -> %-32s\n",
+ 	     lwp_state_str (old), (int) lwp, lwp_state_str (new));
+ }
+ 
+ 
+ /* Wait for a status from the LWP L (or any LWP, if L is NULL),
+    passing FLAGS to waitpid, and record the resulting wait status in
+    the LWP pool appropriately.
+ 
+    If no wait status was available (if FLAGS & WNOHANG), return zero.
+    If we successfully processed some wait status, return 1.  If an
+    error occurs, set errno and return -1.
+ 
+    If waitpid returns an error, print a message to stderr.  */
+ static int
+ wait_and_handle (struct lwp *l, int flags)
+ {
+   int status;
+   pid_t new_pid; 
+   enum lwp_state old_state;
+   
+   /* We can only wait for LWP's that are running.  */
+   if (l)
+     assert (l->state == lwp_state_running
+ 	    || l->state == lwp_state_running_stop_pending);
+ 
+   /* This should be the only call to waitpid in this module, to ensure
+      that we always keep each LWP's state up to date.  In fact, it
+      should be the only call to waitpid used by any module using the
+      LWP pool code at all.  */
+   new_pid = waitpid (l ? l->pid : -1, &status, flags);
+ 
+   if (debug_lwp_pool)
+     {
+       fprintf (stderr,
+ 	       "lwp_pool: wait_and_handle: waitpid (%d, %s, %s) == %d\n",
+ 	       l ? l->pid : -1,
+ 	       (new_pid <= 0 ? "(unset)" : wait_status_str (status)),
+ 	       wait_flags_str (flags),
+ 	       new_pid);
+     }
+ 
+   if (new_pid == -1)
+     {
+       /* If we call fprintf, that'll wipe out the value of errno.  */
+       int saved_errno = errno;
+ 
+       fprintf (stderr, "ERROR: waitpid (%d) failed: %s\n",
+ 	       l ? (int) l->pid : -1,
+ 	       strerror (saved_errno));
+ 
+       errno = saved_errno;
+       return -1;
+     }
+ 
+   if (new_pid == 0)
+     /* No status, so no LWP has changed state.  */
+     return 0;
+ 
+   if (l)
+     {
+       if (l->pid != new_pid)
+ 	{
+ 	  fprintf (stderr, "ERROR: waited for %d, but got %d\n",
+ 		   l->pid, new_pid);
+ 	  l = hash_find_known (new_pid);
+ 	}
+     }
+   else
+     l = hash_find_known (new_pid);
+ 
+   old_state = l->state;
+   
+   l->status = status;
+ 
+   if (WIFEXITED (status) || WIFSIGNALED (status))
+     {
+       /* Remove dead LWP's from the hash table, and put them in the
+ 	 interesting queue.  */
+       hash_delete (l);
+       l->state = lwp_state_dead_interesting;
+       if (l->next)
+ 	queue_delete (l);
+       queue_enqueue (&interesting_queue, l);
+     }
+   else
+     {
+       int stopsig;
+ 
+       assert (WIFSTOPPED (status));
+       
+       stopsig = WSTOPSIG (status);
+ 
+       switch (l->state)
+ 	{
+ 	case lwp_state_uninitialized:
+ 	  /* Might as well clean it up.  */
+ 	case lwp_state_running:
+ 	  /* It stopped, but not because of anything we did, so it's
+ 	     interesting even if it was a SIGSTOP.  */
+ 	  l->state = lwp_state_stopped_interesting;
+ 	  queue_enqueue (&interesting_queue, l);
+ 	  break;
+ 
+ 	case lwp_state_running_stop_pending:
+ 
+ 	  /* If we were in stopping_queue, we're stopped now.  */
+ 	  if (l->next)
+ 	    queue_delete (l);
+ 
+ 	  /* We are expecting a boring SIGSTOP.  Is this it?  */
+ 	  if (stopsig == SIGSTOP)
+ 	    l->state = lwp_state_stopped;
+ 	  else
+ 	    {
+ 	      /* Report this status, but remember that we're still
+ 		 expecting the boring SIGSTOP.  */
+ 	      l->state = lwp_state_stopped_stop_pending_interesting;
+ 	      queue_enqueue (&interesting_queue, l);
+ 	    }
+ 	  break;
+ 
+ 	default:
+ 	  /* The assert at top should prevent any other states from
+ 	     showing up here.  */
+ 	  fprintf (stderr, "ERROR: called waitpid on LWP %d in bad state %s\n",
+ 		   (int) l->pid, lwp_state_str (l->state));
+ 	  abort ();
+ 	  break;
+ 	}
+     }
+ 
+   debug_report_state_change (l->pid, old_state, l->state);
+ 
+   return 1;
+ }
+ 
+ 
+ /* Wait for a pending stop on the running LWP L.  Return non-zero if L
+    ends up in an interesting state, or zero if L ends up in
+    lwp_state_stopped.
+ 
+    Whenever we have an LWP with no interesting status, but with a stop
+    pending, we can always wait on it:
+ 
+    - Since SIGSTOP can't be blocked, caught, or ignored, the wait will
+      always return immediately.  The process won't run amok.
+ 
+    - Since the LWP is uninteresting to begin with, we'll end up with
+      at most one interesting wait status to report; no need to queue
+      up multiple statuses per LWP (which we'd rather not implement if
+      we can avoid it).
+ 
+    By always waiting immediately, we avoid the need for a state like
+    lwp_state_stopped_stop_pending.
+ 
+    So, this function takes a thread in lwp_state_running_stop_pending,
+    and puts that thread in either lwp_state_stopped (no stop pending)
+    or some INTERESTING state.  It's really just
+    wait_and_handle, with some error checking wrapped around
+    it.  */
+ static int
+ check_stop_pending (struct lwp *l)
+ {
+   assert (l->state == lwp_state_running_stop_pending);
+ 
+   wait_and_handle (l, __WALL);
+ 
+   switch (l->state)
+     {
+     case lwp_state_stopped:
+       return 0;
+ 
+     case lwp_state_stopped_stop_pending_interesting:
+     case lwp_state_stopped_interesting:
+     case lwp_state_dead_interesting:
+       return 1;
+ 
+     default:
+       fprintf (stderr,
+ 	       "ERROR: checking lwp %d for pending stop yielded "
+ 	       "bad state %s\n",
+ 	       (int) l->pid, lwp_state_str (l->state));
+       abort ();
+       break;
+     }
+ }
+ 
+ 
+ pid_t
+ lwp_pool_waitpid (pid_t pid, int *stat_loc, int options)
+ {
+   struct lwp *l;
+   enum lwp_state old_state;
+   
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_waitpid (%d, stat_loc, %s)\n",
+ 	     (int) pid, wait_flags_str (options));
+ 
+   /* Check that we're not being passed arguments that would be
+      meaningful for the real waitpid, but that we can't handle.  */
+   assert (pid == -1 || pid > 0);
+   assert (! (options & ~WNOHANG));
+ 
+   /* Do the wait, and choose an LWP to report on.  */
+   if (pid == -1)
+     {
+       /* Handle wait statuses of any sort until something appears on
+ 	 the interesting queue.  */
+       while (! queue_non_empty (&interesting_queue))
+ 	{
+ 	  int result = wait_and_handle (NULL, options | __WALL);
+ 
+ 	  if (result <= 0)
+ 	    return result;
+ 	}
+ 
+       l = queue_first (&interesting_queue);
+     }
+   else
+     {
+       /* Waiting for a status from a specific pid PID.  */
+       l = hash_find_known (pid);
+ 
+       /* We should only wait for known, running LWP's.  */
+       assert (l->state == lwp_state_running
+ 	      || l->state == lwp_state_running_stop_pending);
+ 
+       /* Wait until this pid is no longer running.  */
+       while (l->state == lwp_state_running
+ 	     || l->state == lwp_state_running_stop_pending)
+ 	{
+ 	  int result = wait_and_handle (l, options | __WALL);
+ 
+ 	  if (result <= 0)
+ 	    return result;
+ 	}
+     }
+ 
+   /* Gather info from L early, in case we free it.  */
+   pid = l->pid;
+   old_state = l->state;
+   if (stat_loc)
+     *stat_loc = l->status;
+ 
+   /* The INTERESTING states specifically mean that the LWP has a
+      status which should be reported to the user, but that hasn't been
+      yet.  Now we're about to report that status, so we need to mark
+      interesting LWP's as uninteresting.  */
+   switch (l->state)
+     {
+     case lwp_state_uninitialized:
+     case lwp_state_running:
+     case lwp_state_stopped:
+     case lwp_state_running_stop_pending:
+       /* These are uninteresting states.  The waiting code above
+ 	 should never have chosen an LWP in one of these states.  */
+       fprintf (stderr,
+ 	       "ERROR: %s: selected uninteresting LWP %d state %s\n",
+ 	       __func__, l->pid, lwp_state_str (l->state));
+       abort ();
+       break;
+ 
+     case lwp_state_stopped_interesting:
+       /* Now that we've reported this wait status to the user, the LWP
+ 	 is not interesting any more.  */
+       l->state = lwp_state_stopped;
+       queue_delete (l);
+       debug_report_state_change (l->pid, old_state, l->state);
+       break;
+ 
+     case lwp_state_dead_interesting:
+       /* Once we've reported this status, we have washed our hands of
+ 	 this LWP entirely.  */
+       queue_delete (l);
+       free (l);
+       if (debug_lwp_pool)
+ 	fprintf (stderr, 
+ 		 "lwp_pool: %s: LWP %d state dead_interesting -> freed\n",
+ 		 __func__, pid);
+       break;
+ 
+     case lwp_state_stopped_stop_pending_interesting:
+       /* We're about to report this LWP's status, making it
+ 	 uninteresting, but it's still got a stop pending.  So a state
+ 	 like lwp_state_stopped_stop_pending would seem reasonable.
+ 
+ 	 However, this is the only place such a state would occur.  By
+ 	 removing the LWP from the interesting queue and continuing
+ 	 it, we can go directly from
+ 	 lwp_state_stopped_stop_pending_interesting to
+ 	 lwp_state_running_stop_pending.
+ 
+ 	 Since SIGSTOP cannot be blocked, caught, or ignored, we know
+ 	 continuing the LWP won't actually allow it to run anywhere;
+ 	 it just allows it to report another status.  */
+       queue_delete (l);
+       continue_lwp (l->pid, 0);
+       l->state = lwp_state_running_stop_pending;
+       debug_report_state_change (l->pid, old_state, l->state);
+       check_stop_pending (l);
+       break;
+ 
+     default:
+       fprintf (stderr, "ERROR: lwp %d in bad state: %s\n",
+ 	       (int) l->pid, lwp_state_str (l->state));
+       abort ();
+       break;
+     }
+ 
+   return pid;
+ }
+ 
+ 
+ \f
+ /* libthread_db-based death handling, for NPTL.  */
+ 
+ 
+ static const char *
+ death_state_str (enum death_state d)
+ {
+   switch (d)
+     {
+     case death_state_running: return "death_state_running";
+     case death_state_event_received: return "death_state_event_received";
+     case death_state_delete_when_continued: 
+       return "death_state_delete_when_continued";
+     default:
+       {
+ 	static char buf[100];
+ 	sprintf (buf, "%d (unrecognized death_state)", d);
+ 	return buf;
+       }
+     }
+ }
+ 
+ 
+ static void
+ debug_report_death_state_change (pid_t lwp,
+ 				 enum death_state old,
+ 				 enum death_state new)
+ {
+   if (debug_lwp_pool && old != new)
+     fprintf (stderr,
+ 	     "%32s -- %5d -> %-32s\n",
+ 	     death_state_str (old), (int) lwp, death_state_str (new));
+ }
+ 
+ 
+ void
+ lwp_pool_thread_db_death_event (pid_t pid)
+ {
+   struct lwp *l = hash_find_known (pid);
+   enum death_state old_state = l->death_state;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_thread_db_death_event (%d)\n",
+ 	     (int) pid);
+ 
+   if (l->state == lwp_state_uninitialized)
+     {
+       /* hash_find_known has already complained about this; we just
+ 	 clean up.  */
+       hash_delete (l);
+       free (l);
+       return;
+     }
+ 
+   if (l->death_state == death_state_running)
+     l->death_state = death_state_event_received;
+ 
+   debug_report_death_state_change (pid, old_state, l->death_state);
+ }
+ 
+ 
+ void
+ lwp_pool_thread_db_death_notified (pid_t pid)
+ {
+   struct lwp *l = hash_find_known (pid);
+   enum death_state old_state = l->death_state;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_thread_db_death_notified (%d)\n",
+ 	     (int) pid);
+ 
+   if (l->state == lwp_state_uninitialized)
+     {
+       /* hash_find_known has already complained about this; we just
+ 	 clean up.  */
+       hash_delete (l);
+       free (l);
+       return;
+     }
+ 
+   if (l->death_state == death_state_event_received)
+     l->death_state = death_state_delete_when_continued;
+ 
+   debug_report_death_state_change (pid, old_state, l->death_state);
+ }
+ 
+ 
+ /* Subroutine for the 'continue' functions.  If the LWP L should be
+    forgotten once continued, delete it from the hash table, and free
+    its storage; we'll get no further wait status from it to indicate
+    that it's gone.  */
+ static void
+ check_for_exiting_nptl_lwp (struct lwp *l)
+ {
+   if (l->state == lwp_state_running
+       && l->death_state == death_state_delete_when_continued)
+     {
+       if (debug_lwp_pool)
+ 	fprintf (stderr,
+ 		 "lwp_pool: %s: NPTL LWP %d will disappear silently\n",
+ 		 __func__, l->pid);
+       assert (! l->next && ! l->prev);
+       hash_delete (l);
+       free (l);
+     }
+ }
+ 
+ 
+ 
+ \f
+ /* Stopping and continuing.  */
+ 
+ 
+ void
+ lwp_pool_stop_all (void)
+ {
+   int i;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_stop_all ()\n");
+ 
+   /* The head of the queue of running LWP's that we are stopping.
+      Only the prev and next members are meaningful.  */
+   struct lwp stopping_queue;
+ 
+   stopping_queue.next = stopping_queue.prev = &stopping_queue;
+ 
+   /* First, put every LWP that's not already STOPPED or DEAD in a STOP
+      PENDING state, and put them all on stopping_queue.  */ 
+   for (i = 0; i < hash_size; i++)
+     {
+       struct lwp *l = hash[i];
+ 
+       if (l)
+ 	{
+ 	  enum lwp_state old_state = l->state;
+ 
+ 	  switch (l->state)
+ 	    {
+ 	      /* There should never be 'uninitialized' entries left in
+ 		 the table.  Whoever created them ought to have put them
+ 		 in some meaningful state before returning.  */
+ 	    case lwp_state_uninitialized:
+ 	      assert (l->state != lwp_state_uninitialized);
+ 	      break;
+ 
+ 	    case lwp_state_running:
+ 	      /* A 'no such process' error here indicates an NPTL thread
+ 		 that has exited.  */
+ 	      kill_lwp (l->pid, SIGSTOP);
+ 	      l->state = lwp_state_running_stop_pending;
+ 	      queue_enqueue (&stopping_queue, l);
+ 	      break;
+ 
+ 	    case lwp_state_stopped:
+ 	    case lwp_state_stopped_interesting:
+ 	    case lwp_state_dead_interesting:
+ 	    case lwp_state_stopped_stop_pending_interesting:
+ 	      /* Nothing needs to be done here.  */
+ 	      break;
+ 
+ 	    case lwp_state_running_stop_pending:
+ 	      /* Threads should never be in this state between calls to
+ 		 public lwp_pool functions.  */
+ 	      assert (l->state != lwp_state_running_stop_pending);
+ 	      break;
+ 
+ 	    default:
+ 	      fprintf (stderr, "ERROR: lwp %d in bad state: %s\n",
+ 		       (int) l->pid, lwp_state_str (l->state));
+ 	      abort ();
+ 	      break;
+ 	    }
+ 
+ 	  debug_report_state_change (l->pid, old_state, l->state);
+ 	}
+     }
+ 
+   /* Gather wait results until the stopping queue is empty.  */
+   while (queue_non_empty (&stopping_queue))
+     if (wait_and_handle (NULL, __WALL) < 0)
+       {
+ 	fprintf (stderr, "ERROR: lwp_pool_stop_all wait failed: %s",
+ 		 strerror (errno));
+ 	return;
+       }
+ 
+   /* Now all threads should be stopped or dead.  But let's check.  */
+   for (i = 0; i < hash_size; i++)
+     {
+       struct lwp *l = hash[i];
+       if (l)
+ 	switch (l->state)
+ 	  {
+ 	  case lwp_state_uninitialized:
+ 	    assert (l->state != lwp_state_uninitialized);
+ 	    break;
+ 
+ 	  case lwp_state_running:
+ 	  case lwp_state_running_stop_pending:
+ 	    fprintf (stderr,
+ 		     "ERROR: lwp_pool_stop_all failed: LWP %d still running\n",
+ 		     (int) l->pid);
+ 	    break;
+ 
+ 	  case lwp_state_stopped:
+ 	  case lwp_state_stopped_interesting:
+ 	  case lwp_state_dead_interesting:
+ 	  case lwp_state_stopped_stop_pending_interesting:
+ 	    /* That's all as it should be.  */
+ 	    break;
+ 
+ 	  default:
+ 	    fprintf (stderr, "ERROR: lwp %d in bad state: %s\n",
+ 		     (int) l->pid, lwp_state_str (l->state));
+ 	    abort ();
+ 	    break;
+ 	  }
+     }
+ }
+ 
+ 
+ void
+ lwp_pool_continue_all (void)
+ {
+   int i;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_continue_all ()\n");
+ 
+   /* This loop makes every LWP either INTERESTING, or RUNNING.  */
+   for (i = 0; i < hash_size; i++)
+     {
+       struct lwp *l = hash[i];
+ 
+       if (l)
+ 	{
+ 	  enum lwp_state old_state = l->state;
+ 
+ 	  switch (l->state)
+ 	    {
+ 	      /* There should never be 'uninitialized' entries left in
+ 		 the table.  Whoever created them ought to have put them
+ 		 in some meaningful state before returning.  */
+ 	    case lwp_state_uninitialized:
+ 	      assert (l->state != lwp_state_uninitialized);
+ 	      break;
+ 
+ 	    case lwp_state_running:
+ 	      /* It's already running, so nothing needs to be done.  */
+ 	      break;
+ 
+ 	    case lwp_state_stopped:
+ 	      if (continue_lwp (l->pid, 0) == 0)
+ 		l->state = lwp_state_running;
+ 	      break;
+ 
+ 	    case lwp_state_stopped_interesting:
+ 	    case lwp_state_dead_interesting:
+ 	    case lwp_state_stopped_stop_pending_interesting:
+ 	      /* We still have an unreported wait status here, so leave it
+ 		 alone; we'll report it.  */
+ 	      break;
+ 
+ 	    case lwp_state_running_stop_pending:
+ 	      /* There shouldn't be any threads in this state at this
+ 		 point.  We should be calling check_stop_pending or
+ 		 wait_and_handle as soon as we create them.  */
+ 	      assert (l->state != lwp_state_running_stop_pending);
+ 	      break;
+ 
+ 	    default:
+ 	      fprintf (stderr, "ERROR: lwp %d in bad state: %s\n", 
+ 		       (int) l->pid, lwp_state_str (l->state));
+ 	      abort ();
+ 	      break;
+ 	    }
+ 
+ 	  debug_report_state_change (l->pid, old_state, l->state);
+ 
+ 	  check_for_exiting_nptl_lwp (l);
+ 	}
+     }
+ }
+ 
+ 
+ int
+ lwp_pool_continue_lwp (pid_t pid, int signal)
+ {
+   struct lwp *l = hash_find_known (pid);
+   enum lwp_state old_state = l->state;
+   int result;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_continue_lwp (%d, %d)\n",
+ 	     (int) pid, signal);
+ 
+   /* We should only be continuing stopped threads, with no interesting
+      status to report.  And we should have cleaned up any pending
+      stops as soon as we created them.  */
+   assert (l->state == lwp_state_stopped);
+   result = continue_lwp (l->pid, signal);
+   if (result == 0)
+     l->state = lwp_state_running;
+   debug_report_state_change (l->pid, old_state, l->state);
+ 
+   check_for_exiting_nptl_lwp (l);
+ 
+   return result;
+ }
+ 
+ 
+ int
+ lwp_pool_singlestep_lwp (struct gdbserv *serv, pid_t lwp, int signal)
+ {
+   struct lwp *l = hash_find_known (lwp);
+   enum lwp_state old_state = l->state;
+   int result;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_singlestep_lwp (%p, %d, %d)\n",
+ 	     serv, (int) lwp, signal);
+ 
+   /* We should only be single-stepping known, stopped threads, with no
+      interesting status to report.  And we should have cleaned up any
+      pending stops as soon as we created them.  */
+   assert (l->state == lwp_state_stopped);
+   result = singlestep_lwp (serv, l->pid, signal);
+   if (result == 0)
+     l->state = lwp_state_running;
+   debug_report_state_change (l->pid, old_state, l->state);
+   return result;
+ }
+ 
+ 
+ \f
+ /* Adding new LWP's to the pool.  */
+ 
+ void
+ lwp_pool_new_stopped (pid_t pid)
+ {
+   struct lwp *l = hash_find_new (pid);
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_new_stopped (%d)\n", (int) pid);
+ 
+   l->state = lwp_state_stopped;
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool: %s: new LWP %d state %s\n",
+ 	     __func__, l->pid, lwp_state_str (l->state));
+ }
+ 
+ 
+ int
+ lwp_pool_attach (pid_t pid)
+ {
+   /* Are we already managing this LWP?  */
+   struct lwp *l = hash_find (pid);
+ 
+   if (debug_lwp_pool)
+     fprintf (stderr, "lwp_pool_attach (%d)\n", (int) pid);
+ 
+   if (l->state == lwp_state_uninitialized)
+     {
+       /* No, we really need to attach to it.  */
+       int status = attach_lwp (pid);
+ 
+       if (status)
+ 	{
+ 	  /* Forget about the lwp.  */
+ 	  hash_delete (l);
+ 	  free (l);
+ 	  return status;
+ 	}
+ 
+       /* Since we attached to it, we'll get a SIGSTOP for this
+ 	 eventually.  Wait for it now, to put it in either
+ 	 lwp_state_stopped, or in some interesting state.  */
+       l->state = lwp_state_running_stop_pending;
+ 
+       if (debug_lwp_pool)
+ 	fprintf (stderr, "lwp_pool: %s: new LWP %d state %s\n",
+ 		 __func__, l->pid, lwp_state_str (l->state));
+ 
+       check_stop_pending (l);
+ 
+       return 1;
+     }
+      
+   return 0;
+ }
Index: rda/unix/server.h
===================================================================
RCS file: /cvs/src/src/rda/unix/server.h,v
retrieving revision 1.2.2.1
diff -c -r1.2.2.1 server.h
*** rda/unix/server.h	26 Oct 2004 23:04:44 -0000	1.2.2.1
--- rda/unix/server.h	23 Nov 2004 05:52:21 -0000
***************
*** 51,57 ****
--- 51,64 ----
    char *executable;
    char **argv;
    int  pid;
+ 
+   /* The last thread we reported an event for.  */
    struct gdbserv_thread *event_thread;
+ 
+   /* If the client continues or single-steps a single thread, leaving
+      the rest of the program stopped, this is that thread.  */
+   struct gdbserv_thread *focus_thread;
+ 
    int  stop_status;
    int  stop_signal;
    long signal_to_send;
Index: rda/unix/thread-db.c
===================================================================
RCS file: /cvs/src/src/rda/unix/thread-db.c,v
retrieving revision 1.9.2.6
diff -c -r1.9.2.6 thread-db.c
*** rda/unix/thread-db.c	1 Nov 2004 21:55:37 -0000	1.9.2.6
--- rda/unix/thread-db.c	23 Nov 2004 05:52:23 -0000
***************
*** 41,46 ****
--- 41,48 ----
  #include "arch.h"
  #include "gdb_proc_service.h"
  #include "gdbserv-thread-db.h"
+ #include "lwp-ctrl.h"
+ #include "lwp-pool.h"
  
  /* Make lots of noise (debugging output). */
  int thread_db_noisy = 1;
***************
*** 184,208 ****
  /* Define the struct gdbserv_thread object. */
  
  struct gdbserv_thread {
    td_thrinfo_t ti;
  
-   /* True if we have attached to this thread, but haven't yet
-      continued or single-stepped it.  */
-   int attached : 1;
- 
-   /* True if we have sent this thread a SIGSTOP (because some other
-      thread has had something interesting happen, and we want the
-      whole program to stop), but not yet continued or single-stepped it.  */
-   int stopped : 1;
- 
-   /* True if we have called waitpid, and consumed any extraneous wait
-      statuses created by attaching, stopping, etc.  */
-   int waited : 1;
- 
-   /* True if we last single-stepped this thread, instead of continuing
-      it.  When choosing one event out of many to report to GDB, we
-      give stepped events higher priority than some others.  */
-   int stepping : 1;
    struct gdbserv_thread *next;
  } *thread_list;
  
--- 186,242 ----
  /* Define the struct gdbserv_thread object. */
  
  struct gdbserv_thread {
+ 
+   /* A note about thread states (TI.ti_state):
+ 
+      When a thread calls pthread_exit, it first runs all its
+      cancellation cleanup functions (see pthread_cleanup_push), and
+      then calls destructors for its thread-specific data (see
+      pthread_key_create).  If the thread is not detached, it then
+      makes the pointer passed to pthread_exit available for thread(s)
+      calling pthread_join.  Then, the thread terminates.
+ 
+      If a thread's start function, passed to pthread_create, returns,
+      then an implementation may assume that the cleanups have run
+      already (the POSIX threads interface requires user code to ensure
+      that this is the case).  So it just runs the destructors, and
+      terminates.
+ 
+      In glibc 2.3.3's NPTL, if a thread calls pthread_exit,
+      libthread_db says its state is TD_THR_ZOMBIE while it runs its
+      cleanups and destructors.  However, if a thread simply returns
+      from its start function, then libthread_db says it's
+      TD_THR_ACTIVE while it runs its destructors.  Other versions of
+      libthread_db seem to do inconsistent things like that as well.
+ 
+      A note about LWP id's (TI.ti_lid):
+ 
+      After a thread has exited, the libthread_db's for LinuxThreads
+      and NPTL report its ti_lid as being equal to the pid of the main
+      thread.  To be precise, it reports the LWP id's as being equal to
+      ps_getpid (PROCHANDLE), where PROCHANDLE is the 'struct
+      ps_prochandle' passed to td_ta_new when we created the thread
+      agent in the first place.
+ 
+      The idea here seems to be, "There are no kernel-level resources
+      devoted to the thread any more that a debugger could talk to, so
+      let's hand the debugger whatever info we used to create the
+      thread agent in the first place, so it can at least talk to what
+      remains of the process."  This is a nice thought, but since the
+      thread_db interface doesn't give us any way to stop threads or
+      wait for them, the debugger needs to break through the
+      abstraction and operate on LWP's directly to do those things.
+      libthread_db's attempt to be helpful, together with the
+      sloppiness in the ti_state handling, makes figuring out whether
+      there even *is* an LWP to operate on pretty difficult.
+ 
+      If we attach to a process using some pid P, whose corresponding
+      thread happens to have called pthread_exit, then there's no way
+      for us to distinguish the threads whose LWP id is reported as P
+      because they're dead from the thread whose LWP id is reported as
+      P because that really is its LWP: they're all zombies.  */
    td_thrinfo_t ti;
  
    struct gdbserv_thread *next;
  } *thread_list;
  
***************
*** 215,220 ****
--- 249,255 ----
    struct gdbserv_thread *new = malloc (sizeof (struct gdbserv_thread));
  
    /* First cut -- add to start of list. */
+   memset (new, 0, sizeof (*new));
    memcpy (&new->ti, ti, sizeof (td_thrinfo_t));
    new->next = thread_list;
    thread_list = new;
***************
*** 284,316 ****
    return tmp;
  }
  
- static struct gdbserv_thread *
- thread_list_lookup_by_lid (lwpid_t pid)
- {
-   struct gdbserv_thread *tmp;
- 
-   for (tmp = thread_list; tmp; tmp = tmp->next)
-     if (tmp->ti.ti_lid == pid)
-       break;
- 
-   return tmp;
- }
- 
- /* Return a pointer to a statically allocated string describing
-    THREAD.  For debugging.  */
- static const char *
- thread_debug_name (struct gdbserv_thread *thread)
- {
-   if (thread)
-     {
-       static char buf[50];
-       sprintf (buf, "(%p %d)", thread, thread->ti.ti_lid);
-       return buf;
-     }
-   else
-     return "(null thread)";
- }
- 
  /* A copy of the next lower layer's target vector, before we modify it. */
  static struct gdbserv_target parentvec;
  
--- 319,324 ----
***************
*** 424,432 ****
    case TD_THR_UNKNOWN:		return "<officially unknown>";
    case TD_THR_STOPPED:		return "<stopped>";
    case TD_THR_RUN:		return "<running>";
!   case TD_THR_ACTIVE:		return "<active> ";
!   case TD_THR_ZOMBIE:		return "<zombie> ";
!   case TD_THR_SLEEP:		return "<sleep>  ";
    case TD_THR_STOPPED_ASLEEP:	return "<stopped asleep>";
    default:
      sprintf (buf, "<unknown state code %d>", statecode);
--- 432,440 ----
    case TD_THR_UNKNOWN:		return "<officially unknown>";
    case TD_THR_STOPPED:		return "<stopped>";
    case TD_THR_RUN:		return "<running>";
!   case TD_THR_ACTIVE:		return "<active>";
!   case TD_THR_ZOMBIE:		return "<zombie>";
!   case TD_THR_SLEEP:		return "<sleep>";
    case TD_THR_STOPPED_ASLEEP:	return "<stopped asleep>";
    default:
      sprintf (buf, "<unknown state code %d>", statecode);
***************
*** 438,444 ****
  thread_db_type_str (td_thr_type_e type)
  {
    switch (type) {
!   case TD_THR_USER:		return "<user>  ";
    case TD_THR_SYSTEM:		return "<system>";
    default:                      return "<unknown>";
    }
--- 446,452 ----
  thread_db_type_str (td_thr_type_e type)
  {
    switch (type) {
!   case TD_THR_USER:		return "<user>";
    case TD_THR_SYSTEM:		return "<system>";
    default:                      return "<unknown>";
    }
***************
*** 505,510 ****
--- 513,544 ----
    }
  }
  
+ /* Return a pointer to a statically allocated string describing
+    THREAD.  For debugging.  The resulting string has the form
+    "(TID STATE LID PTR)", where:
+    - TID is the thread ID, which you'll see in the user program and
+      in the remote protocol,
+    - STATE is the state of the thread, which can be important in 
+      deciding how to interpret LID,
+    - LID is the PID of the underlying LWP, and
+    - PTR is the address of the 'struct gdbserv_thread' in RDA, so you can
+      actually mess with it further if you want.  */
+ static const char *
+ thread_debug_name (struct gdbserv_thread *thread)
+ {
+   if (thread)
+     {
+       static char buf[100];
+       sprintf (buf, "(0x%lx %s %d %p)",
+ 	       (unsigned long) thread->ti.ti_tid,
+ 	       thread_db_state_str (thread->ti.ti_state),
+ 	       thread->ti.ti_lid,
+ 	       thread);
+       return buf;
+     }
+   else
+     return "(null thread)";
+ }
  
  /* flag which indicates if the map_id2thr cache is valid.  See below.  */
  static int thread_db_map_id2thr_cache_valid;
***************
*** 551,556 ****
--- 585,614 ----
    thread_db_map_id2thr_cache_valid = 0;
  }
  
+ static struct gdbserv_thread *
+ thread_list_lookup_by_lid (lwpid_t pid)
+ {
+   struct gdbserv_thread *t;
+   struct gdbserv_thread *second_choice = NULL;
+ 
+   /* Ideally, we'd be using td_ta_map_lwp2thr here.  */
+ 
+   for (t = thread_list; t; t = t->next)
+     if (t->ti.ti_lid == pid)
+       {
+ 	/* libthread_db reports the ti_lid of a deceased thread as
+ 	   being equal to ps_getpid (&proc_handle).  So be a bit
+ 	   skeptical of those.  */
+ 	if (pid == proc_handle.pid
+ 	    && (t->ti.ti_state == TD_THR_ZOMBIE
+ 		|| t->ti.ti_state == TD_THR_UNKNOWN))
+ 	  second_choice = t;
+ 	else
+ 	  return t;
+       }
+ 
+   return second_choice;
+ }
+ 
  /* The regset cache object.  This object keeps track of the most
     recently fetched or set gregset (of a particular type) and whether
   or not it still needs to be synchronized with the target.  */
***************
*** 1301,1313 ****
  static void
  attach_thread (struct gdbserv_thread *thread)
  {
!   if (thread->ti.ti_lid   != 0 &&
!       thread->ti.ti_state != TD_THR_ZOMBIE)	/* Don't attach a zombie. */
      {
!       if (attach_lwp (thread->ti.ti_lid) == 0)
! 	thread->attached = 1;
!       else
! 	thread->attached = 0;
      }
  }
  
--- 1359,1387 ----
  static void
  attach_thread (struct gdbserv_thread *thread)
  {
!   if (thread->ti.ti_lid != 0)
      {
!       /* We attach to all threads with a plausible LWP PID, including
! 	 TD_THR_ZOMBIE threads.  libthread_db sometimes reports
! 	 threads still executing cleanups or thread-specific data
! 	 destructors as zombies, so it may be important to attach to
! 	 them.
! 
! 	 libthread_db never reports an invalid LWP PID in ti.ti_lid,
! 	 even when the LWP has exited --- in that case, it returns
! 	 ps_getpid (&proc_handle).  The LWP pool code tolerates
! 	 multiple requests to attach to the same PID.  */
!       int status = lwp_pool_attach (thread->ti.ti_lid);
! 
!       /* If we're using signals to communicate with the thread
! 	 library, send the newly attached thread the restart
! 	 signal.  It will remain stopped, but it will receive the
! 	 signal as soon as we continue it.  */
!       if (got_thread_signals)
! 	{
! 	  if (status == 1)
! 	    kill_lwp (thread->ti.ti_lid, restart_signal);
! 	}
      }
  }
  
***************
*** 1339,1350 ****
  	  if (thread_db_noisy)
  	    fprintf (stderr, "(new thread %s)\n", thread_debug_name (thread));
  
! 	  /* Now make sure we've attached to it.  
! 	     Skip the main pid (already attached). */
! 	  if (thread->ti.ti_lid != proc_handle.pid)
! 	    {
! 	      attach_thread (thread);
! 	    }
  
  	  if (using_thread_db_events)
  	    {
--- 1413,1419 ----
  	  if (thread_db_noisy)
  	    fprintf (stderr, "(new thread %s)\n", thread_debug_name (thread));
  
! 	  attach_thread (thread);
  
  	  if (using_thread_db_events)
  	    {
***************
*** 1372,1378 ****
     If not, prune it from the list. */
  
  static void
! update_thread_list (void)
  {
    struct gdbserv_thread *thread, *next;
    td_thrhandle_t handle;
--- 1441,1447 ----
     If not, prune it from the list. */
  
  static void
! update_thread_list (struct child_process *process)
  {
    struct gdbserv_thread *thread, *next;
    td_thrhandle_t handle;
***************
*** 1404,1409 ****
--- 1473,1481 ----
  	      /* Thread is no longer "valid".
  	         By the time this happens, it's too late for us to 
  	         detach from it.  Just delete it from the list.  */
+ 
+ 	      if (thread == process->focus_thread)
+ 		process->focus_thread = NULL;
  	      
  	      delete_thread_from_list (thread);
  	    }
***************
*** 1422,1428 ****
        /* First request -- build up thread list using td_ta_thr_iter. */
        /* NOTE: this should be unnecessary, once we begin to keep the
  	 list up to date all the time. */
!       update_thread_list ();
      }
    return next_thread_in_list (thread);
  }
--- 1494,1501 ----
        /* First request -- build up thread list using td_ta_thr_iter. */
        /* NOTE: this should be unnecessary, once we begin to keep the
  	 list up to date all the time. */
!       struct child_process *process = gdbserv_target_data (serv);
!       update_thread_list (process);
      }
    return next_thread_in_list (thread);
  }
***************
*** 1600,2023 ****
  {
    char *info = malloc (128);
  
!   sprintf (info, "PID %d Type %s State %s",
! 	   thread->ti.ti_lid, 
  	   thread_db_type_str (thread->ti.ti_type),
! 	   thread_db_state_str (thread->ti.ti_state));
!   return info;
! }
! 
! /* Function: stop_thread 
!    Use SIGSTOP to force a thread to stop. */
! 
! static void
! stop_thread (struct gdbserv_thread *thread)
! {
!   if (thread->ti.ti_lid != 0)
!     {
!       if (thread_db_noisy)
! 	fprintf (stderr, "(stop thread %s)\n", thread_debug_name (thread));
!       if (stop_lwp (thread->ti.ti_lid) == 0)
! 	thread->stopped = 1;
!       else
! 	thread->stopped = 0;
!     }
! }
! 
! /* Function: stop_all_threads
!    Use SIGSTOP to make sure all child threads are stopped.
!    Do not send SIGSTOP to the event thread, or to any 
!    new threads that have just been attached. */
! 
! static void
! stop_all_threads (struct child_process *process)
! {
!   struct gdbserv_thread *thread;
! 
!   for (thread = first_thread_in_list ();
!        thread;
!        thread = next_thread_in_list (thread))
!     {
!       if (thread->ti.ti_lid == process->pid)
! 	{
! 	  /* HACK: mark him stopped.
! 	     It would make more sense to do this in
! 	     thread_db_check_child_state, where we received his
! 	     waitstatus and thus know he's stopped.  But that code is
! 	     also used when we don't have a thread list yet, so the
! 	     'struct gdbserv_thread' whose 'stopped' flag we want to
! 	     set may not exist.  */
! 	  thread->stopped = 1;
! 	  continue;	/* This thread is already stopped. */
! 	}
!       /* All threads must be stopped, unless
! 	 a) they have only just been attached, or 
! 	 b) they're already stopped. */
!       if (!thread->attached && !thread->stopped &&
! 	  thread->ti.ti_state != TD_THR_ZOMBIE &&
! 	  thread->ti.ti_state != TD_THR_UNKNOWN)
! 	stop_thread (thread);
!     }
! }
! 
! /* A list of signals that have been prematurely sucked out of the threads.
!    Because of the complexities of linux threads, we must send SIGSTOP to
!    every thread, and then call waitpid on the thread to retrieve the 
!    SIGSTOP event.  Sometimes another signal is pending on the thread,
!    and we get that one by mistake.  Throw all such signals into this
!    list, and send them back to their respective threads once we're
!    finished calling waitpid. */
! 
! static struct event_list {
!   struct gdbserv_thread *thread;
!   union wait waited;
!   int selected;
!   int thread_db_event;
! } *pending_events;
! static int pending_events_listsize;
! static int pending_events_top;
! 
! /* Function: add_pending_event
!    Helper function for wait_all_threads.
! 
!    When we call waitpid for each thread (trying to consume the SIGSTOP
!    events that we sent from stop_all_threads), we sometimes inadvertently
!    get other events that we didn't send.  We pend these to a list, and 
!    then resend them to the child threads after our own SIGSTOP events
!    have been consumed.  
! 
!    This list will be used to choose which of the possible events 
!    will be returned to the debugger by check_child_status. */
! 
! static void
! add_pending_event (struct gdbserv_thread *thread, union wait waited)
! {
!   if (pending_events_top >= pending_events_listsize)
!     {
!       pending_events_listsize += 64;
!       pending_events = 
! 	realloc (pending_events, 
! 		 pending_events_listsize * sizeof (*pending_events));
!     }
!   pending_events [pending_events_top].thread = thread;
!   pending_events [pending_events_top].waited = waited;
!   pending_events [pending_events_top].selected = 0;
!   pending_events [pending_events_top].thread_db_event = 0;
!   pending_events_top ++;
! }
! 
! 
! /* Delete the I'th pending event.  This will reorder events at indices
!    I and higher, but not events whose indices are less than I.
! 
!    This function runs in constant time, so you can iterate through the
!    whole pending event pool by deleting events as you process them.
!    But the nice thing about this function is that you can also handle
!    only selected events, and leave others for later.  */
! static void
! delete_pending_event (int i)
! {
!   /* You shouldn't ask to delete an event that's not actually in the
!      list.  */
!   assert (0 <= i && i < pending_events_top);
  
!   /* Copy the last element down into this element's position, unless
!      this is the last element itself.  */
!   if (i < pending_events_top - 1)
!     pending_events[i] = pending_events[pending_events_top - 1];
! 
!   /* Now the deleted space is at the end of the array.  So just
!      decrement the top pointer, and we're done.  */
!   pending_events_top--;
  }
  
  
! /* Function: select_pending_event
!    Helper function for thread_db_check_child_state.
! 
!    Having collected a list of events from various threads, 
!    choose one "favored event" to be returned to the debugger.
! 
!    Return non-zero if we selected an event, or zero if we couldn't
!    find anything interesting to report.  */
! 
  
  static int
! select_pending_event (struct child_process *process)
! {
!   int i = 0;
!   int num_wifstopped_events = 0;
!   int random_key;
! 
!   /* Select the event that will be returned to the debugger. */
! 
!   /* Selection criterion #0:
!      If there are no events, don't do anything!  (paranoia) */
!   if (pending_events_top == 0)
!     {
!       if (thread_db_noisy)
! 	fprintf (stderr, "(selected nothing)\n");
!       return 0;
!     }
! 
!   /* Selection criterion #1: 
!      If the thread pointer is null, then the thread library is
!      not in play yet, so this is the only thread and the only event. */
!   if (pending_events[0].thread == NULL)
!     {
!       i = 0;
!       goto selected;
!     }
! 
!   /* Selection criterion #2:
!      Exit and terminate events take priority. */
!   for (i = 0; i < pending_events_top; i++)
!     if (WIFEXITED (pending_events[i].waited) ||
! 	WIFSIGNALED (pending_events[i].waited))
!       {
! 	goto selected;
!       }
! 
!   /* Selection criterion #3: 
!      Give priority to a stepping SIGTRAP. */
!   for (i = 0; i < pending_events_top; i++)
!     if (pending_events[i].thread->stepping &&
! 	WIFSTOPPED (pending_events[i].waited) &&
! 	WSTOPSIG (pending_events[i].waited) == SIGTRAP)
!       {
! 	/* We don't actually know whether this sigtrap was the result
! 	   of a singlestep, or of executing a trap instruction.  But
! 	   GDB has a better chance of figuring it out than we do. */
! 	goto selected;
!       }
! 
!   /* Selection criterion #4:
!      Count the WIFSTOPPED events and choose one at random. */
!   for (i = 0; i < pending_events_top; i++)
!     if (WIFSTOPPED (pending_events[i].waited))
!       num_wifstopped_events ++;
! 
!   random_key = (int) 
!     ((num_wifstopped_events * (double) rand ()) / (RAND_MAX + 1.0));
! 
!   for (i = pending_events_top - 1; i >= 0; i--)
!     if (WIFSTOPPED (pending_events[i].waited))
!       {
! 	if (random_key == --num_wifstopped_events)
! 	  {
! 	    goto selected;
! 	  }
! 	else if (WSTOPSIG (pending_events[i].waited) == SIGINT)
! 	  {
! 	    goto selected;	/* Give preference to SIGINT. */
! 	  }
!       }
! 
!   /* Selection criterion #5 (should never get here):
!      If all else fails, take the first event in the list. */
!   i = 0;
! 
!  selected:	/* Got our favored event. */
! 
!   if (thread_db_noisy)
!     fprintf (stderr, "(selected %s)\n",
! 	     thread_debug_name (pending_events[i].thread));
! 
!   pending_events[i].selected = 1;
!   process->event_thread = pending_events[i].thread;
!   if (pending_events[i].thread)
!     process->pid = pending_events[i].thread->ti.ti_lid;
! 
!   handle_waitstatus (process, pending_events[i].waited);
!   if (thread_db_noisy)
!     fprintf (stderr, "<select_pending_event: pid %d '%c' %d>\n",
! 	    process->pid, process->stop_status, process->stop_signal);
!   return 1;
! }
! 
! /* Function: send_pending_signals
!    Helper function for thread_db_check_child_state.
! 
!    When we call waitpid for each thread (trying to consume the SIGSTOP
!    events that we sent from stop_all_threads), we sometimes inadvertently
!    get other events that we didn't send.  We pend these to a list, and 
!    then resend them to the child threads after our own SIGSTOP events
!    have been consumed. 
! 
!    Some events in the list require special treatment:
!     * One event is "selected" to be returned to the debugger. 
!       Skip that one.
!     * Trap events may represent breakpoints.  We can't just resend
!       the signal.  Instead we must arrange for the breakpoint to be
!       hit again when the thread resumes.  */
! 
! static void
! send_pending_signals (struct child_process *process)
! {
!   int i;
!   int signum;
! 
!   for (i = 0; i < pending_events_top; i++)
!     {
!       if (WIFSTOPPED (pending_events[i].waited) &&
! 	  ! pending_events[i].selected)
! 	{
! 	  signum = WSTOPSIG (pending_events[i].waited);
! 	  if (signum == SIGTRAP &&
! 	      pending_events[i].thread->stepping == 0)
! 	    {
! 	      /* Breakpoint.  Push it back.  */
! 	      if (thread_db_noisy)
! 		fprintf (stderr, "<send_pending_events: pushing back SIGTRAP for %d>\n",
! 			pending_events[i].thread->ti.ti_lid);
! 	      decr_pc_after_break (process->serv,
! 	                           pending_events[i].thread->ti.ti_lid);
! 	    }
! 	  else /* FIXME we're letting SIGINT go thru as normal */
! 	    {
! 	      /* Put the signal back into the child's queue. */
! 	      kill (pending_events[i].thread->ti.ti_lid, 
! 		    WSTOPSIG (pending_events[i].waited));
! 	    }
! 	}
!     }
!   pending_events_top = 0;
! }
! 
! /* Function: wait_all_threads
!    Use waitpid to close the loop on all threads that have been
!    attached or SIGSTOP'd.  Skip the eventpid -- it's already been waited. 
! 
!    Special considerations:
!      The debug signal does not go into the event queue, 
!      does not get forwarded to the thread etc. */
! 
! static void
! wait_all_threads (struct child_process *process)
  {
!   struct gdbserv_thread *thread;
!   union  wait w;
!   int    ret, stopsig;
! 
!   for (thread = first_thread_in_list ();
!        thread;
!        thread = next_thread_in_list (thread))
!     {
!       /* Special handling for the thread that has already been waited. */
!       if (thread->ti.ti_lid == process->pid)
! 	{
! 	  /* HACK mark him waited. */
! 	  thread->waited = 1;
! 	  continue;
! 	}
! 
!       while ((thread->stopped || thread->attached) &&
! 	     !thread->waited)
! 	{
! 	  errno = 0;
! 	  if (thread_db_noisy)
! 	    fprintf (stderr, "(waiting for %s)\n",
! 		     thread_debug_name (thread));
! 	  ret = waitpid (thread->ti.ti_lid, (int *) &w, 
! 			 thread->ti.ti_lid == proc_handle.pid ? 0 : __WCLONE);
! 	  if (ret == -1)
! 	    {
! 	      if (errno == ECHILD)
! 		fprintf (stderr, "<wait_all_threads: %d has disappeared>\n", 
! 			 thread->ti.ti_lid);
! 	      else
! 		fprintf (stderr, "<wait_all_threads: waitpid %d failed, '%s'>\n", 
! 			 thread->ti.ti_lid, strerror (errno));
! 	      break;
! 	    }
! 	  if (WIFEXITED (w))
! 	    {
! 	      add_pending_event (thread, w);
! 	      fprintf (stderr, "<wait_all_threads: %d has exited>\n", 
! 		       thread->ti.ti_lid);
! 	      break;
! 	    }
! 	  if (WIFSIGNALED (w))
! 	    {
! 	      add_pending_event (thread, w);
! 	      fprintf (stderr, "<wait_all_threads: %d died with signal %d>\n", 
! 		       thread->ti.ti_lid, WTERMSIG (w));
! 	      break;
! 	    }
! 	  stopsig = WSTOPSIG (w);
! 	  switch (stopsig) {
! 	  case SIGSTOP:
! 	    /* This is the one we're looking for.
! 	       Mark the thread as 'waited' and move on to the next thread. */
! #if 0 /* too noisy! */
! 	    if (thread_db_noisy)
! 	      fprintf (stderr, "<waitpid (%d, SIGSTOP)>\n", thread->ti.ti_lid);
! #endif
! 	      thread->waited = 1;
! 	    break;
! 	  default:
! 	    if (stopsig == debug_signal)
! 	      {
! 		/* This signal does not need to be forwarded. */
! 		if (thread_db_noisy)
! 		  fprintf (stderr, "<wait_all_threads: ignoring SIGDEBUG (%d) for %d>\n",
! 			   debug_signal,
! 			   thread->ti.ti_lid);
! 	      }
! 	    else
! 	      {
! 		if (thread_db_noisy)
! 		  fprintf (stderr, "<wait_all_threads: stash sig %d for %d at 0x%08lx>\n",
! 			   stopsig, thread->ti.ti_lid,
! 			   (unsigned long) debug_get_pc (process->serv,
! 							 thread->ti.ti_lid));
! 		add_pending_event (thread, w);
! 	      }
! 	  }
! 
! 	  if (!thread->waited)	/* Signal was something other than STOP. */
! 	    {
! 	      /* Continue the thread so it can stop on the next signal. */
! 	      continue_lwp (thread->ti.ti_lid, 0);
! 	    }
! 	}
!     }
! }
! 
  
! /* Scan the list for threads that have stopped at libthread_db event
!    breakpoints, process the events they're reporting, and step the
!    threads past the breakpoints, updating the pending_events
!    table.
  
!    This function assumes that all threads have been stopped.  */
! static void
! handle_thread_db_events (struct child_process *process)
! {
!   struct gdbserv *serv = process->serv;
!   int i;
!   int any_events;
  
!   /* Are there any threads at all stopped at libthread_db event
!      breakpoints?  */
!   any_events = 0;
!   for (i = 0; i < pending_events_top; i++)
!     {
!       struct event_list *e = &pending_events[i];
!       if (e->thread
! 	  && WIFSTOPPED (e->waited)
! 	  && WSTOPSIG (e->waited) == SIGTRAP
! 	  && hit_thread_db_event_breakpoint (serv, e->thread))
! 	{
! 	  any_events = 1;
! 	  e->thread_db_event = 1;
! 	}
!     }
  
!   if (! any_events)
!     return;
  
!   /* Consume events.  */
    for (;;)
      {
        td_event_msg_t msg;
--- 1673,1724 ----
  {
    char *info = malloc (128);
  
!   /* When a thread's LWP has exited, NPTL reports its ti_lid as
!      being equal to that of the main process.  Which is a little
!      confusing.  So print the pid in a helpfully detailed way.  */
!   sprintf (info, "Type %s State %s PID %d%s",
  	   thread_db_type_str (thread->ti.ti_type),
! 	   thread_db_state_str (thread->ti.ti_state),
! 	   thread->ti.ti_lid,
! 	   (thread->ti.ti_lid == proc_handle.pid ? " (main)" : ""));
  
!   return info;
  }
  
  
! /* If we are using the libthread_db event interface, and PROCESS is
!    stopped at an event breakpoint, handle the event.
  
+    If we've taken care of PROCESS's situation and it needs no further
+    attention, return non-zero.  If PROCESS still needs attention (say,
+    because we're not using the event interface, or PROCESS didn't in
+    fact hit an event breakpoint, or it did but had new interesting
+    things happen when we tried to single-step it), return zero.  */
  static int
! handle_thread_db_event (struct child_process *process)
  {
!   struct gdbserv *serv = process->serv;
!   struct gdbserv_thread *thread = process->event_thread;
!   lwpid_t lwp;
!   union wait w;
  
!   /* We need to be actually using the event interface.  */
!   if (! using_thread_db_events)
!     return 0;
  
!   /* We need a thread to work on.  */
!   if (! thread)
!     return 0;
  
!   /* It needs to be stopped at an event breakpoint.  */
!   if (! (process->stop_status == 'T'
! 	 && process->stop_signal == SIGTRAP
! 	 && hit_thread_db_event_breakpoint (serv, thread)))
!     return 0;
  
!   lwp = thread->ti.ti_lid;
  
!   /* Consume events from the queue.  */
    for (;;)
      {
        td_event_msg_t msg;
***************
*** 2034,2102 ****
  	}
  
        /* The only messages we're concerned with are TD_CREATE and
! 	 TD_DEATH.  But since we call update_thread_list every time
! 	 thread_db_check_child_state gets a wait status from waitpid,
! 	 our list is always up to date, so we don't actually need to
! 	 do anything with these messages.
  
  	 (Ignore the question, for now, of how RDA loses when threads
  	 spawn off new threads after we've updated our list, but
! 	 before we've managed to send each of the threads on our list
! 	 a SIGSTOP.)  */
!     }
  
!   /* Disable the event breakpoints while we step the threads across
!      them.  */
!   delete_thread_db_event_breakpoints (serv);
! 
!   for (i = 0; i < pending_events_top;)
!     {
!       struct event_list *e = &pending_events[i];
!       if (e->thread_db_event)
  	{
! 	  struct gdbserv_thread *thread = e->thread;
! 	  lwpid_t lwp = thread->ti.ti_lid;
! 	  union wait w;
  
! 	  /* Delete this pending event.  If appropriate, we'll add a
! 	     new pending event below, but if stepping across the event
! 	     breakpoint is successful, then this pending event, at
! 	     least, has been addressed.  */
! 	  delete_pending_event (i);
  
! 	  /* Back up the thread, if needed.  */
! 	  decr_pc_after_break (serv, lwp);
  
! 	  /* Single-step the thread across the breakpoint.  */
! 	  singlestep_lwp (serv, lwp, 0);
  
! 	  /* Get a new status for that thread.  */
! 	  if (thread_db_noisy)
! 	    fprintf (stderr, "(waiting after event bp step %s)\n",
! 		     thread_debug_name (thread));
! 	  if (waitpid (lwp, (int *) &w, lwp == proc_handle.pid ? 0 : __WCLONE)
! 	      < 0)
! 	    fprintf (stderr, "error waiting for thread %d after "
! 		     "stepping over event breakpoint:\n%s",
! 		     lwp, strerror (errno));
! 	  else
! 	    {
! 	      /* If the result is a SIGTRAP signal, then that means
! 		 the single-step proceeded normally.  Otherwise, it's
! 		 a new pending event.  */
! 	      if (WIFSTOPPED (w)
! 		  && WSTOPSIG (w) == SIGTRAP)
! 		;
! 	      else
! 		add_pending_event (thread, w);
! 	    }
! 	}
!       else
! 	i++;
      }
  
    /* Re-insert the event breakpoints.  */
    insert_thread_db_event_breakpoints (serv);
  }
  
  
--- 1735,1813 ----
  	}
  
        /* The only messages we're concerned with are TD_CREATE and
! 	 TD_DEATH.
! 
! 	 Every time thread_db_check_child_state gets a wait status
! 	 from waitpid, we call update_thread_list, so our list is
! 	 always up to date; we don't actually need to do anything with
! 	 these messages for our own sake.
! 
! 	 However, the LWP pool module needs to be told when threads
! 	 are about to exit, since NPTL gives no kernel-level
! 	 indication of this.  Threads just disappear.
  
  	 (Ignore the question, for now, of how RDA loses when threads
  	 spawn off new threads after we've updated our list, but
! 	 before we've managed to send each of the LWP's a
! 	 SIGSTOP.)  */
  
!       if (msg.event == TD_DEATH)
  	{
! 	  td_thrinfo_t ti;
! 	  
! 	  status = td_thr_get_info_p (msg.th_p, &ti);
! 	  if (status != TD_OK)
! 	    {
! 	      fprintf (stderr, 
! 		       "error getting thread info on dying thread: %s\n",
! 		       thread_db_err_str (status));
! 	      break;
! 	    }
  
! 	  /* Tell the LWP pool code that this thread's death has been
! 	     foretold.  */
! 	  lwp_pool_thread_db_death_event ((pid_t) ti.ti_lid);
! 	}
!     }
  
!   /* Disable the event breakpoints while we step the thread across them.  */
!   delete_thread_db_event_breakpoints (serv);
  
!   /* Back up the thread, if needed.  */
!   decr_pc_after_break (serv, lwp);
  
!   /* Single-step the thread across the breakpoint.  */
!   lwp_pool_singlestep_lwp (serv, lwp, 0);
! 
!   /* Get a new status for that thread.  */
!   if (thread_db_noisy)
!     fprintf (stderr, "(waiting after event bp step %s)\n",
! 	     thread_debug_name (thread));
!   if (lwp_pool_waitpid (lwp, (int *) &w, 0) < 0)
!     {
!       fprintf (stderr, "error waiting for thread %d after "
! 	       "stepping over event breakpoint:\n%s",
! 	       lwp, strerror (errno));
!       /* We don't have any new status to report...  */
!       return 1;
      }
  
+   /* Tell the LWP pool that this thread has notified RDA of an event.  */
+   lwp_pool_thread_db_death_notified (lwp);
+ 
    /* Re-insert the event breakpoints.  */
    insert_thread_db_event_breakpoints (serv);
+ 
+   /* If the wait status is a SIGTRAP signal, then that means the
+      single-step proceeded normally.  Otherwise, it's a new event we
+      should deal with.  */
+   if (WIFSTOPPED (w) && WSTOPSIG (w) == SIGTRAP)
+     return 1;
+   else
+     {
+       handle_waitstatus (process, w);
+       return 0;
+     }
  }
  
  
***************
*** 2108,2144 ****
  {
    thread_db_flush_regset_caches();
  
-   /* Continue thread only if (a) it was just attached, or 
-      (b) we stopped it and waited for it. */
    if (thread->ti.ti_lid != 0)
!     if (thread->attached || (thread->stopped && thread->waited))
!       {
! 	continue_lwp (thread->ti.ti_lid, signal);
! 	thread->stopped = thread->attached = thread->waited = 0;
!       }
!   thread_db_invalidate_caches ();
! }
! 
! /* Function: continue_all_threads 
!    Send continue to all stopped or attached threads
!    except the event thread (which will be continued separately). */
! 
! static void
! continue_all_threads (struct gdbserv *serv)
! {
!   struct gdbserv_thread *thread;
  
!   for (thread = first_thread_in_list ();
!        thread;
!        thread = next_thread_in_list (thread))
!     {
!       /* If we're using signals to communicate with the thread
! 	 library, send any newly attached thread the restart signal. */
!       if (got_thread_signals && thread->attached)
! 	continue_thread (thread, restart_signal);
!       else
! 	continue_thread (thread, 0);
!     }
  }
  
  /* Function: continue_program
--- 1819,1828 ----
  {
    thread_db_flush_regset_caches();
  
    if (thread->ti.ti_lid != 0)
!     lwp_pool_continue_lwp (thread->ti.ti_lid, signal);
  
!   thread_db_invalidate_caches ();
  }
  
  /* Function: continue_program
***************
*** 2154,2170 ****
  
    /* First resume the event thread. */
    if (process->event_thread)
!     continue_thread (process->event_thread, process->signal_to_send);
    else
!     continue_lwp (process->pid, process->signal_to_send);
  
    process->stop_signal = process->stop_status = 
      process->signal_to_send = 0;
  
    /* Then resume everyone else. */
!   continue_all_threads (serv);
    process->running = 1;
    thread_db_invalidate_caches ();
  }
  
  /* Function: singlestep_thread
--- 1838,1856 ----
  
    /* First resume the event thread. */
    if (process->event_thread)
!     continue_thread (process->event_thread, process->signal_to_send);
    else
!     lwp_pool_continue_lwp (process->pid, process->signal_to_send);
  
    process->stop_signal = process->stop_status = 
      process->signal_to_send = 0;
  
    /* Then resume everyone else. */
!   lwp_pool_continue_all ();
    process->running = 1;
    thread_db_invalidate_caches ();
+ 
+   process->focus_thread = NULL;
  }
  
  /* Function: singlestep_thread
***************
*** 2175,2183 ****
                     struct gdbserv_thread *thread,
                     int signal)
  {
!   singlestep_lwp (serv, thread->ti.ti_lid, signal);
!   thread->stopped = thread->attached = thread->waited = 0;
!   thread->stepping = 1;
  }
  
  /* Function: singlestep_program
--- 1861,1867 ----
                     struct gdbserv_thread *thread,
                     int signal)
  {
!   lwp_pool_singlestep_lwp (serv, thread->ti.ti_lid, signal);
  }
  
  /* Function: singlestep_program
***************
*** 2196,2210 ****
    if (process->event_thread)
      singlestep_thread (serv, process->event_thread, process->signal_to_send);
    else
!     singlestep_lwp (serv, process->pid, process->signal_to_send);
  
    process->stop_status = process->stop_signal =
      process->signal_to_send = 0;
  
    /* Then resume everyone else. */
!   continue_all_threads (serv);		/* All but the event thread. */
    process->running = 1;
    thread_db_invalidate_caches ();
  }
  
  /* Function: thread_db_continue_thread
--- 1880,1896 ----
    if (process->event_thread)
      singlestep_thread (serv, process->event_thread, process->signal_to_send);
    else
!     lwp_pool_singlestep_lwp (serv, process->pid, process->signal_to_send);
  
    process->stop_status = process->stop_signal =
      process->signal_to_send = 0;
  
    /* Then resume everyone else. */
!   lwp_pool_continue_all ();
    process->running = 1;
    thread_db_invalidate_caches ();
+ 
+   process->focus_thread = NULL;
  }
  
  /* Function: thread_db_continue_thread
***************
*** 2240,2245 ****
--- 1926,1936 ----
        process->running = 1;
      }
    thread_db_invalidate_caches ();
+ 
+   /* If we continued a particular thread, then collect wait statuses
+      for that thread only.  Otherwise, look for events from
+      everyone.  */
+   process->focus_thread = thread;
  }
  
  /* Function: singlestep_thread
***************
*** 2274,2279 ****
--- 1965,1974 ----
        process->running = 1;
      }
    thread_db_invalidate_caches ();
+ 
+   /* If we stepped a particular thread, then collect wait statuses for
+      that thread only.  Otherwise, look for events from everyone.  */
+   process->focus_thread = thread;
  }
  
  /* Function: exit_program
***************
*** 2323,2339 ****
  
    if (process->running)
      {
!       eventpid = waitpid (-1, (int *) &w, WNOHANG);
!       /* If no event on main thread, check clone threads. 
!          It doesn't matter what event we find first, since we now have
!          a fair algorithm for choosing which event to handle next. */
!       if (eventpid <= 0)
! 	eventpid = waitpid (-1, (int *) &w, WNOHANG | __WCLONE);
  
        if (eventpid > 0)	/* found an event */
  	{
- 	  int selected_anything;
- 
  	  /* Allow underlying target to use the event process by default,
  	     since it is stopped and the others are still running. */
  	  process->pid = eventpid;
--- 2018,2036 ----
  
    if (process->running)
      {
!       eventpid = -1;
! 
!       /* If we only stepped or continued a single thread, check for
! 	 status results only from that thread, even though there may
! 	 be others collected from before.  */
!       if (process->focus_thread)
! 	eventpid = lwp_pool_waitpid (process->focus_thread->ti.ti_lid,
! 				     (int *) &w, WNOHANG);
!       else
! 	eventpid = lwp_pool_waitpid (-1, (int *) &w, WNOHANG);
  
        if (eventpid > 0)	/* found an event */
  	{
  	  /* Allow underlying target to use the event process by default,
  	     since it is stopped and the others are still running. */
  	  process->pid = eventpid;
***************
*** 2353,2406 ****
  		return 0;	/* Just a thread exit, don't tell GDB. */
  	    }
  
! 	  /* FIXME: this debugging output will be removed soon, but 
! 	     putting it here before the update_thread_list etc. is
! 	     bad from the point of view of synchronization. */
! 	  handle_waitstatus (process, w);
  	  if (thread_db_noisy)
  	    fprintf (stderr,
  		     "\n<check_child_state: %d got '%c' - %d at 0x%08lx>\n", 
! 		     process->pid, process->stop_status, process->stop_signal,
  		     (unsigned long) debug_get_pc (process->serv, process->pid));
- 	  /* It shouldn't hurt to call this twice.  But if there are a
- 	     lot of other threads running, it can take a *long* time
- 	     for the thread list update to complete.  */
- 	  stop_all_threads (process);
  
  	  /* Update the thread list. */
! 	  update_thread_list ();
  
  	  /* For now, call get_thread_signals from here (FIXME:) */
  	  get_thread_signals ();
  
! 	  /* Put this child's event into the pending list. */
! 	  add_pending_event (thread_list_lookup_by_lid ((lwpid_t) eventpid), 
! 			     w);
  
! 	  stop_all_threads (process);
! 	  wait_all_threads (process);
! 	  if (using_thread_db_events)
! 	    handle_thread_db_events (process);
! 	  selected_anything = select_pending_event (process);
! 	  send_pending_signals (process);
! 
! 	  /* If there weren't any pending events to report, then
! 	     continue the program, and let the main loop know that
! 	     nothing interesting happened.  */
! 	  if (! selected_anything)
  	    {
  	      currentvec->continue_program (serv);
  	      return 0;
  	    }
  
! 	  /* Note: if more than one thread has an event ready to be
! 	     handled, wait_all_threads will have chosen one at random. */
! 
  	  if (got_thread_signals && ignore_thread_signal (process))
  	    {
  	      /* Ignore this signal, restart the child. */
  	      if (thread_db_noisy)
! 		fprintf (stderr, "<check_child_state: ignoring signal %d for %d>\n",
  			 process->stop_signal, process->pid);
  	      if (process->stop_signal == debug_signal)
  		{
--- 2050,2097 ----
  		return 0;	/* Just a thread exit, don't tell GDB. */
  	    }
  
! 	  /* It doesn't hurt to call this twice.  But if there are a
! 	     lot of other threads running, then RDA is competing with
! 	     them for time slices and it can take a long time for the
! 	     thread list update to complete.  */
! 	  lwp_pool_stop_all ();
! 
  	  if (thread_db_noisy)
  	    fprintf (stderr,
  		     "\n<check_child_state: %d got '%c' - %d at 0x%08lx>\n", 
! 		     process->pid,
! 		     process->stop_status,
! 		     process->stop_signal,
  		     (unsigned long) debug_get_pc (process->serv, process->pid));
  
  	  /* Update the thread list. */
! 	  update_thread_list (process);
! 
! 	  process->event_thread = thread_list_lookup_by_lid (process->pid);
  
  	  /* For now, call get_thread_signals from here (FIXME:) */
  	  get_thread_signals ();
  
! 	  /* Stop any new threads we've recognized.  */
! 	  lwp_pool_stop_all ();
  
! 	  /* If we're using the thread_db event interface, and this is
! 	     a thread_db event, then just handle it silently and
! 	     continue.  */
! 	  if (handle_thread_db_event (process))
  	    {
  	      currentvec->continue_program (serv);
  	      return 0;
  	    }
  
! 	  /* If we're using the signal-based interface, and someone
! 	     got a thread-related signal, then deal with that.  */
  	  if (got_thread_signals && ignore_thread_signal (process))
  	    {
  	      /* Ignore this signal, restart the child. */
  	      if (thread_db_noisy)
! 		fprintf (stderr, 
! 			 "<check_child_state: ignoring signal %d for %d>\n",
  			 process->stop_signal, process->pid);
  	      if (process->stop_signal == debug_signal)
  		{
***************
*** 2429,2451 ****
  	      return 0;
  	    }
  
- 	  if (process->stop_status == 'W')
- 	    {
- 	      if (process->pid == proc_handle.pid)
- 		return 1;	/* Main thread exited! */
- 	      else
- 		{
- 		  currentvec->continue_program (serv);
- 		  return 0;	/* Just a thread exit, don't tell GDB. */
- 		}
- 	    }
- 
  	  process->running = 0;
  
- 	  /* This is the place to cancel its 'stepping' flag. */
- 	  if (process && process->event_thread)
- 	    process->event_thread->stepping = 0;
- 
  	  /* Pass this event back to GDB. */
  	  if (process->debug_backend)
  	    fprintf (stderr, "wait returned '%c' (%d) for %d.\n", 
--- 2120,2127 ----
***************
*** 2863,2868 ****
--- 2539,2550 ----
    gdbserver.fromtarget_break = thread_db_fromtarget_thread_break;
    /* FIXME what about terminate and exit? */
  
+   /* Record the initial thread's pid in the LWP pool.  */
+   lwp_pool_new_stopped (process->pid);
+ 
+   /* Initially, there is no focus thread.  */
+   process->focus_thread = NULL;
+ 
    /* Set up the regset caches.  */
    initialize_regset_caches ();
    return 0;		/* success */
Index: rda/unix/gdbserv-thread-db.h
===================================================================
RCS file: /cvs/src/src/rda/unix/gdbserv-thread-db.h,v
retrieving revision 1.2.2.1
diff -c -r1.2.2.1 gdbserv-thread-db.h
*** rda/unix/gdbserv-thread-db.h	29 Oct 2004 23:38:02 -0000	1.2.2.1
--- rda/unix/gdbserv-thread-db.h	23 Nov 2004 05:52:21 -0000
***************
*** 83,111 ****
  		             int regno, 
  		             const void *xregset);
  
- /* Resume a stopped LWP. */
- extern int continue_lwp (lwpid_t lid, int signal);
- 
- /* Step a stopped LWP. */
- extern int singlestep_lwp (struct gdbserv *serv, lwpid_t lid, int signal);
- 
  /* Software singlestep for mips.  */
  #if defined (MIPS_LINUX_TARGET) || defined (MIPS64_LINUX_TARGET)
  extern int mips_singlestep (struct gdbserv *serv, pid_t pid, int sig);
  #endif
  
- /* Attach an LWP. */
- extern int attach_lwp (lwpid_t lid);
- 
  /* Fetch the value of PC for debugging purposes.  */
  extern unsigned long debug_get_pc (struct gdbserv *serv, pid_t pid);
  
  /* Adjust PC value after trap has been hit.  */
  extern int decr_pc_after_break (struct gdbserv *serv, pid_t pid);
  
- /* Send SIGSTOP to an LWP.  */
- extern int stop_lwp (lwpid_t lwpid);
- 
  struct child_process;
  extern int handle_waitstatus (struct child_process *process, union wait w);
  
--- 83,99 ----
Index: rda/unix/lwp-ctrl.h
===================================================================
RCS file: rda/unix/lwp-ctrl.h
diff -N rda/unix/lwp-ctrl.h
*** rda/unix/lwp-ctrl.h	1 Jan 1970 00:00:00 -0000
--- rda/unix/lwp-ctrl.h	23 Nov 2004 05:52:21 -0000
***************
*** 0 ****
--- 1,61 ----
+ /* lwp-ctrl.h --- interface to functions for controlling LWPs
+ 
+    Copyright 2004 Red Hat, Inc.
+ 
+    This file is part of RDA, the Red Hat Debug Agent (and library).
+ 
+    This program is free software; you can redistribute it and/or modify
+    it under the terms of the GNU General Public License as published by
+    the Free Software Foundation; either version 2 of the License, or
+    (at your option) any later version.
+ 
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+    GNU General Public License for more details.
+ 
+    You should have received a copy of the GNU General Public License
+    along with this program; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place - Suite 330,
+    Boston, MA 02111-1307, USA.
+    
+    Alternative licenses for RDA may be arranged by contacting Red Hat,
+    Inc.  */
+ 
+ #ifndef RDA_UNIX_LWP_CTRL_H
+ #define RDA_UNIX_LWP_CTRL_H
+ 
+ #include <sys/types.h>
+ 
+ struct gdbserv;
+ 
+ /* Do a PTRACE_ATTACH on LWP.  Do not wait for the resulting wait
+    status.  On success, return zero; on failure, print a message, set
+    errno, and return -1.  */
+ int attach_lwp (pid_t lwp);
+ 
+ /* Singlestep the stopped lwp LWP, which is managed by SERV.  If LWP
+    successfully completes the next instruction, it will receive a
+    SIGTRAP signal.  If SIGNAL is non-zero, send it to LWP.  Do not
+    wait for the resulting wait status.  On success, return zero; on
+    failure, print a message, set errno, and return -1.  */
+ int singlestep_lwp (struct gdbserv *serv, pid_t lwp, int signal);
+ 
+ /* Continue the stopped lwp LWP.  If SIGNAL is non-zero, send it to
+    LWP.  Do not wait for the resulting wait status.  On success,
+    return zero; on failure, print a message, set errno, and return
+    -1.  */
+ int continue_lwp (pid_t lwp, int signal);
+ 
+ /* Send LWP the signal SIGNAL.  Do not wait for the resulting wait
+    status.  On success, return zero; on failure, print a message, set
+    errno, and return -1.  
+ 
+    If possible, this uses the 'tkill' system call to ensure the signal
+    is sent to that exact LWP, and not distributed to whatever thread
+    in the process is ready to handle it (as POSIX says ordinary 'kill'
+    must do).  */
+ int kill_lwp (pid_t lwp, int signal);
+ 
+ 
+ #endif /* RDA_UNIX_LWP_CTRL_H */
Index: rda/unix/ptrace-target.c
===================================================================
RCS file: /cvs/src/src/rda/unix/ptrace-target.c,v
retrieving revision 1.7.2.4
diff -c -r1.7.2.4 ptrace-target.c
*** rda/unix/ptrace-target.c	1 Nov 2004 21:43:51 -0000	1.7.2.4
--- rda/unix/ptrace-target.c	23 Nov 2004 05:52:21 -0000
***************
*** 24,29 ****
--- 24,31 ----
  
  #include "config.h"
  
+ #define _GNU_SOURCE
+ 
  #include <stdio.h>
  #include <assert.h>
  #include <stdlib.h>
***************
*** 44,49 ****
--- 46,52 ----
  
  #include "server.h"
  #include "ptrace-target.h"
+ #include "lwp-ctrl.h"
  /* This is unix ptrace gdbserv target that uses the RDA library to implement
     a remote gdbserver on a unix ptrace host.  It controls the process
     to be debugged on the linux host, allowing GDB to pull the strings
***************
*** 1264,1353 ****
    return 0;
  }
  
! /* Exported service functions */
! 
! /* Function: continue_lwp
!    Send PTRACE_CONT to an lwp. 
!    Returns -1 for failure, zero for success. */
  
! extern int
! continue_lwp (lwpid_t lwpid, int signal)
  {
    if (thread_db_noisy)
!     fprintf (stderr, "<ptrace (PTRACE_CONT, %d, 0, %d)>\n", lwpid, signal);
  
!   if (ptrace (PTRACE_CONT, lwpid, 0, signal) < 0)
      {
!       fprintf (stderr, "<<< ERROR: PTRACE_CONT %d failed >>>\n", lwpid);
        return -1;
      }
    return 0;
  }
  
- /* Function: singlestep_lwp
-    Send PTRACE_SINGLESTEP to an lwp.
-    Returns -1 for failure, zero for success. */
- 
  int
! singlestep_lwp (struct gdbserv *serv, lwpid_t lwpid, int signal)
  {
  
  #if defined (MIPS_LINUX_TARGET) || defined (MIPS64_LINUX_TARGET)
    {
      if (thread_db_noisy)
!       fprintf (stderr, "<singlestep_lwp lwpid=%d signal=%d>\n", lwpid, signal);
!     mips_singlestep (serv, lwpid, signal);
      return 0;
    }
  #else
    if (thread_db_noisy)
!     fprintf (stderr, "<ptrace (PTRACE_SINGLESTEP, %d, 0, %d)>\n", lwpid, signal);
  
!   if (ptrace (PTRACE_SINGLESTEP, lwpid, 0, signal) < 0)
      {
!       fprintf (stderr, "<<< ERROR: PTRACE_SINGLESTEP %d failed >>>\n", lwpid);
        return -1;
      }
  #endif
    return 0;
  }
  
! /* Function: attach_lwp
!    Send PTRACE_ATTACH to an lwp.
!    Returns -1 for failure, zero for success. */
! 
! extern int
! attach_lwp (lwpid_t lwpid)
  {
    errno = 0;
!   if (ptrace (PTRACE_ATTACH, lwpid, 0, 0) == 0)
      {
        if (thread_db_noisy)
! 	fprintf (stderr, "<ptrace (PTRACE_ATTACH, %d, 0, 0)>\n", lwpid);
        return 0;
      }
    else
      {
        fprintf (stderr, "<<< ERROR ptrace attach %d failed, %s >>>\n",
! 	       lwpid, strerror (errno));
        return -1;
      }
  }
  
  
! /* Function: stop_lwp
!    Use SIGSTOP to force an lwp to stop. 
!    Returns -1 for failure, zero for success. */
! 
! extern int
! stop_lwp (lwpid_t lwpid)
  {
    int result;
  
    /* Under NPTL, signals sent via kill get delivered to whatever
       thread in the group can handle them; they don't necessarily go to
       the thread whose PID you passed.  This makes kill useless for
!      stop_lwp's purposes: it's trying to stop a particular thread.
  
       The tkill system call lets you direct a signal at a particular
       thread.  Use that if it's available (as it is on all systems
--- 1267,1351 ----
    return 0;
  }
  
! /* Exported service functions; see "lwp-ctrl.h".  */
  
! int
! continue_lwp (pid_t lwp, int signal)
  {
    if (thread_db_noisy)
!     fprintf (stderr, "<ptrace (PTRACE_CONT, %d, 0, %d)>\n", lwp, signal);
  
!   if (ptrace (PTRACE_CONT, lwp, 0, signal) < 0)
      {
!       fprintf (stderr, "<<< ERROR: PTRACE_CONT %d failed: %s >>>\n", 
! 	       lwp, strerror (errno));
        return -1;
      }
    return 0;
  }
  
  int
! singlestep_lwp (struct gdbserv *serv, pid_t lwp, int signal)
  {
  
  #if defined (MIPS_LINUX_TARGET) || defined (MIPS64_LINUX_TARGET)
    {
      if (thread_db_noisy)
!       fprintf (stderr, "<singlestep_lwp lwp=%d signal=%d>\n", lwp, signal);
!     mips_singlestep (serv, lwp, signal);
      return 0;
    }
  #else
    if (thread_db_noisy)
!     fprintf (stderr, "<ptrace (PTRACE_SINGLESTEP, %d, 0, %d)>\n", lwp, signal);
  
!   if (ptrace (PTRACE_SINGLESTEP, lwp, 0, signal) < 0)
      {
!       int saved_errno = errno;
! 
!       fprintf (stderr, "<<< ERROR: PTRACE_SINGLESTEP %d failed: %s >>>\n",
! 	       lwp, strerror (errno));
!       
!       errno = saved_errno;
        return -1;
      }
  #endif
    return 0;
  }
  
! int
! attach_lwp (pid_t lwp)
  {
    errno = 0;
!   if (ptrace (PTRACE_ATTACH, lwp, 0, 0) == 0)
      {
        if (thread_db_noisy)
! 	fprintf (stderr, "<ptrace (PTRACE_ATTACH, %d, 0, 0)>\n", lwp);
        return 0;
      }
    else
      {
+       int saved_errno = errno;
+ 
        fprintf (stderr, "<<< ERROR ptrace attach %d failed, %s >>>\n",
! 	       lwp, strerror (errno));
! 
!       errno = saved_errno;
        return -1;
      }
  }
  
  
! int
! kill_lwp (pid_t lwp, int signal)
  {
    int result;
  
    /* Under NPTL, signals sent via kill get delivered to whatever
       thread in the group can handle them; they don't necessarily go to
       the thread whose PID you passed.  This makes kill useless for
!      kill_lwp's purposes: it's trying to send a signal to a particular
!      thread.
  
       The tkill system call lets you direct a signal at a particular
       thread.  Use that if it's available (as it is on all systems
***************
*** 1361,1367 ****
      if (could_have_tkill)
        {
  	errno = 0;
! 	result = syscall (SYS_tkill, lwpid, SIGSTOP);
  	if (errno == 0)
  	  return result;
  	else if (errno == ENOSYS)
--- 1359,1365 ----
      if (could_have_tkill)
        {
  	errno = 0;
! 	result = syscall (SYS_tkill, lwp, signal);
  	if (errno == 0)
  	  return result;
  	else if (errno == ENOSYS)
***************
*** 1369,1386 ****
  	  could_have_tkill = 0;
  	else
  	  {
! 	    fprintf (stderr, "<<< ERROR -- tkill (%d, SIGSTOP) failed >>>\n",
! 		     lwpid);
  	    return -1;
  	  }
        }
    }
  #endif
  
!   result = kill (lwpid, SIGSTOP);
    if (result != 0)
      {
!       fprintf (stderr, "<<< ERROR -- kill (%d, SIGSTOP) failed >>>\n", lwpid);
        return -1;
      }
  
--- 1367,1394 ----
  	  could_have_tkill = 0;
  	else
  	  {
! 	    int saved_errno = errno;
! 
! 	    fprintf (stderr,
! 		     "<<< ERROR -- tkill (%d, %s) failed: %s >>>\n",
! 		     lwp, strsignal (signal), strerror (errno));
! 
! 	    errno = saved_errno;
  	    return -1;
  	  }
        }
    }
  #endif
  
!   result = kill (lwp, signal);
    if (result != 0)
      {
!       int saved_errno = errno;
! 
!       fprintf (stderr, "<<< ERROR -- kill (%d, %s) failed >>>\n", 
! 	       lwp, strsignal (signal));
! 
!       errno = saved_errno;
        return -1;
      }
  
Index: rda/unix/configure.in
===================================================================
RCS file: /cvs/src/src/rda/unix/configure.in,v
retrieving revision 1.6.2.1
diff -c -r1.6.2.1 configure.in
*** rda/unix/configure.in	26 Oct 2004 23:04:44 -0000	1.6.2.1
--- rda/unix/configure.in	23 Nov 2004 05:52:21 -0000
***************
*** 28,34 ****
  
  case "$target" in
    mips64*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o ptrace-target.o" 
      AC_DEFINE(LINUX_TARGET)
      AC_DEFINE(GREGSET_T,  prgregset_t)
      AC_DEFINE(FPREGSET_T, prfpregset_t)
--- 28,34 ----
  
  case "$target" in
    mips64*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o lwp-pool.o ptrace-target.o" 
      AC_DEFINE(LINUX_TARGET)
      AC_DEFINE(GREGSET_T,  prgregset_t)
      AC_DEFINE(FPREGSET_T, prfpregset_t)
***************
*** 42,48 ****
    arm*linux* | \
    mips*linux* | \
    frv*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o ptrace-target.o" 
      AC_DEFINE(LINUX_TARGET)
      AC_DEFINE(GREGSET_T,  prgregset_t)
      AC_DEFINE(FPREGSET_T, prfpregset_t)
--- 42,48 ----
    arm*linux* | \
    mips*linux* | \
    frv*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o lwp-pool.o ptrace-target.o" 
      AC_DEFINE(LINUX_TARGET)
      AC_DEFINE(GREGSET_T,  prgregset_t)
      AC_DEFINE(FPREGSET_T, prfpregset_t)
Index: rda/unix/configure
===================================================================
RCS file: /cvs/src/src/rda/unix/configure,v
retrieving revision 1.7.2.1
diff -c -r1.7.2.1 configure
*** rda/unix/configure	26 Oct 2004 23:04:44 -0000	1.7.2.1
--- rda/unix/configure	23 Nov 2004 05:52:21 -0000
***************
*** 4443,4449 ****
  
  case "$target" in
    mips64*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o ptrace-target.o"
      cat >>confdefs.h <<\_ACEOF
  #define LINUX_TARGET 1
  _ACEOF
--- 4443,4449 ----
  
  case "$target" in
    mips64*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o lwp-pool.o ptrace-target.o"
      cat >>confdefs.h <<\_ACEOF
  #define LINUX_TARGET 1
  _ACEOF
***************
*** 4478,4484 ****
    arm*linux* | \
    mips*linux* | \
    frv*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o ptrace-target.o"
      cat >>confdefs.h <<\_ACEOF
  #define LINUX_TARGET 1
  _ACEOF
--- 4478,4484 ----
    arm*linux* | \
    mips*linux* | \
    frv*linux*)
!     TARGET_MODULES="linux-target.o thread-db.o lwp-pool.o ptrace-target.o"
      cat >>confdefs.h <<\_ACEOF
  #define LINUX_TARGET 1
  _ACEOF


Thread overview: 5+ messages
-- links below jump to the message on this page --
2004-11-23  6:16 NPTL work committed to jimb-rda-nptl-branch Jim Blandy
2004-12-02 20:15 ` Daniel Jacobowitz
2004-12-03  0:44   ` Jim Blandy
2004-12-03  0:47     ` Daniel Jacobowitz
2004-12-03  3:33       ` Jim Blandy