* [RFC] Improve performance of multi-threaded debugging
@ 2005-09-14 22:04 Kevin Buettner
  2005-09-14 22:09 ` Daniel Jacobowitz
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Kevin Buettner @ 2005-09-14 22:04 UTC (permalink / raw)
  To: rda

As things stand now, the thread list is fetched each time rda checks
the status of the program.  This doesn't sound like such a burden until
you realize that some decent sized data structures are read via the
ptrace() interface.  On slower machines, it can take a significant amount
of time to single step or continue primarily due to this overhead.
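
For context, a full thread-list refresh through libthread_db looks
roughly like the sketch below.  This is only an illustration of the
generic td_ta_thr_iter() interface, not RDA's actual update_thread_list();
the point is that every callback reads a per-thread descriptor out of the
inferior, and each of those reads goes through the proc_service layer,
i.e. ptrace(), word by word.

#include <thread_db.h>

/* Illustrative only: count the inferior's threads by walking the
   libthread_db thread list.  Each td_thr_get_info() call copies a
   sizable per-thread descriptor out of the inferior's address space
   via the proc_service callbacks, which for a ptrace-based debugger
   means ptrace() reads.  Doing this on every stop is what makes
   stepping slow on small targets.  */

static int
count_thread_cb (const td_thrhandle_t *th, void *data)
{
  td_thrinfo_t ti;

  if (td_thr_get_info (th, &ti) != TD_OK)
    return -1;	/* A non-zero return stops the iteration.  */

  ++*(int *) data;
  return 0;
}

static int
count_threads (td_thragent_t *ta)
{
  int count = 0;

  td_ta_thr_iter (ta, count_thread_cb, &count,
                  TD_THR_ANY_STATE, TD_THR_LOWEST_PRIORITY,
                  TD_SIGNO_MASK, TD_THR_ANY_USER_FLAGS);
  return count;
}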

The patch below fetches the thread list only when it knows for certain
that something has changed.

The only fly in the ointment is that the signal based event model only
knows about thread creation, but not about thread death.  So it won't
catch thread death until some new thread is created.  I'm not sure
what the implications of this are in practice.

Comments?

	* thread-db.c (ALWAYS_UPDATE_THREAD_LIST): Define to be 0.
	(handle_thread_db_event): Update thread list upon receipt of
	TD_CREATE or TD_DEATH events.
	(thread_db_check_child_state): Potentially disable, depending upon
	value of ALWAYS_UPDATE_THREAD_LIST, the thread list update.
	(thread_db_check_child_state): Update thread list for signal based
	event model too.

Index: thread-db.c
===================================================================
RCS file: /cvs/cvsfiles/devo/rda/unix/thread-db.c,v
retrieving revision 1.25.2.1
diff -u -p -r1.25.2.1 thread-db.c
--- thread-db.c	3 Aug 2005 05:20:58 -0000	1.25.2.1
+++ thread-db.c	14 Sep 2005 21:31:48 -0000
@@ -48,6 +48,8 @@
 int thread_db_noisy = 0;
 int proc_service_noisy = 0;
 
+#define ALWAYS_UPDATE_THREAD_LIST 0
+
 /*
  * A tiny local symbol table.
  *
@@ -1779,6 +1781,7 @@ handle_thread_db_event (struct child_pro
   struct gdbserv_thread *thread = process->event_thread;
   lwpid_t lwp;
   union wait w;
+  int do_update = 0;
 
   /* We need to be actually using the event interface.  */
   if (! using_thread_db_events)
@@ -1812,13 +1815,16 @@ handle_thread_db_event (struct child_pro
 	  break;
 	}
 
-      /* The only messages we're concerned with are TD_CREATE and
-	 TD_DEATH.
+      if (msg.event == TD_CREATE || msg.event == TD_DEATH)
+	do_update = 1;
+    }
 
-	 Every time thread_db_check_child_state gets a wait status
-	 from waitpid, we call update_thread_list, so our list is
-	 always up to date; we don't actually need to do anything with
-	 these messages for our own sake.  */
+  if (do_update)
+    {
+#if !ALWAYS_UPDATE_THREAD_LIST
+      /* Update the thread list.  */
+      update_thread_list (process);
+#endif
     }
 
   /* Disable the event breakpoints while we step the thread across them.  */
@@ -2122,9 +2128,11 @@ thread_db_check_child_state (struct chil
 		     process->stop_signal,
 		     (unsigned long) debug_get_pc (process->serv, process->pid));
 
+#if ALWAYS_UPDATE_THREAD_LIST
 	  /* Update the thread list, and attach to (and thereby stop)
              any new threads we find.  */
 	  update_thread_list (process);
+#endif
 
 	  process->event_thread = thread_list_lookup_by_lid (process->pid);
 
@@ -2170,6 +2178,12 @@ thread_db_check_child_state (struct chil
 		    process->stop_signal = restart_signal;
 		  else				/* not main thread */
 		    process->stop_signal = 0;
+
+#if !ALWAYS_UPDATE_THREAD_LIST
+		  /* Update the thread list.  */
+		  update_thread_list (process);
+#endif
+
 		}
 	      process->signal_to_send = process->stop_signal;
 	      currentvec->continue_program (serv);


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-14 22:04 [RFC] Improve performance of multi-threaded debugging Kevin Buettner
@ 2005-09-14 22:09 ` Daniel Jacobowitz
  2005-09-14 22:31   ` Kevin Buettner
  2005-09-16 20:13 ` Jim Blandy
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Daniel Jacobowitz @ 2005-09-14 22:09 UTC (permalink / raw)
  To: rda

On Wed, Sep 14, 2005 at 03:04:39PM -0700, Kevin Buettner wrote:
> As things stand now, the thread list is fetched each time rda checks
> the status of the program.  This doesn't sound like such a burden until
> you realize that some decent sized data structures are read via the
> ptrace() interface.  On slower machines, it can take a significant amount
> of time to single step or continue primarily due to this overhead.
> 
> The patch below fetches the thread list only when it knows for certain
> that something has changed.
> 
> The only fly in the ointment is that the signal based event model only
> knows about thread creation, but not about thread death.  So it won't
> catch thread death until some new thread is created.  I'm not sure
> what the implications of this are in practice.

You may run a risk of not noticing when a thread is destroyed
and then a new thread is created with the same thread ID - which NPTL
does all the time.

Other than that I haven't looked at rda enough to comment.  I'm not
sure what you mean about not knowing about thread death; you ought to
be notified about thread death (A) via TD_* and (B) via wait, except on
some broken RH 2.4 kernels.
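
For reference, the TD_* notifications mentioned above are libthread_db's
event machinery.  A minimal sketch of enabling creation/death reporting
follows; it is illustrative rather than RDA's actual setup code, which
also has to plant the event breakpoints and retrieve the event messages
when they are hit (the msg.event checks in handle_thread_db_event above).

#include <thread_db.h>

/* Sketch: ask libthread_db to report thread creation and death.  Once
   enabled, the inferior hits an event breakpoint whenever a thread is
   created or dies, and td_ta_event_getmsg() returns a message whose
   event field is TD_CREATE or TD_DEATH.  */

static td_err_e
enable_thread_events (td_thragent_t *ta)
{
  td_thr_events_t events;

  td_event_emptyset (&events);
  td_event_addset (&events, TD_CREATE);
  td_event_addset (&events, TD_DEATH);

  return td_ta_set_event (ta, &events);
}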

-- 
Daniel Jacobowitz
CodeSourcery, LLC


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-14 22:09 ` Daniel Jacobowitz
@ 2005-09-14 22:31   ` Kevin Buettner
  0 siblings, 0 replies; 8+ messages in thread
From: Kevin Buettner @ 2005-09-14 22:31 UTC (permalink / raw)
  To: rda

On Wed, 14 Sep 2005 18:08:51 -0400
Daniel Jacobowitz <drow@false.org> wrote:

> On Wed, Sep 14, 2005 at 03:04:39PM -0700, Kevin Buettner wrote:
> > As things stand now, the thread list is fetched each time rda checks
> > the status of the program.  This doesn't sound like such a burden until
> > you realize that some decent sized data structures are read via the
> > ptrace() interface.  On slower machines, it can take a significant amount
> > of time to single step or continue primarily due to this overhead.
> > 
> > The patch below fetches the thread list only when it knows for certain
> > that something has changed.
> > 
> > The only fly in the ointment is that the signal based event model only
> > knows about thread creation, but not about thread death.  So it won't
> > catch thread death until some new thread is created.  I'm not sure
> > what the implications of this are in practice.
> 
> You may run a risk of not noticing when a thread is destroyed
> and then a new thread is created with the same thread ID - which NPTL
> does all the time.
> 
> Other than that I haven't looked at rda enough to comment.  I'm not
> sure what you mean about not knowing about thread death; you ought to
> be notified about thread death (A) via TD_* and (B) via wait, except on
> some broken RH 2.4 kernels.

I have the TD_ case taken care of.  My patch didn't address the wait() case
though.

Thanks for your comments!

Kevin


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-14 22:04 [RFC] Improve performance of multi-threaded debugging Kevin Buettner
  2005-09-14 22:09 ` Daniel Jacobowitz
@ 2005-09-16 20:13 ` Jim Blandy
  2005-09-16 20:20 ` Jim Blandy
  2005-11-04 21:16 ` Kevin Buettner
  3 siblings, 0 replies; 8+ messages in thread
From: Jim Blandy @ 2005-09-16 20:13 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: rda


Kevin Buettner <kevinb@redhat.com> writes:
> As things stand now, the thread list is fetched each time rda checks
> the status of the program.  This doesn't sound like such a burden until
> you realize that some decent sized data structures are read via the
> ptrace() interface.  On slower machines, it can take a significant amount
> of time to single step or continue primarily due to this overhead.
>
> The patch below fetches the thread list only when it knows for certain
> that something has changed.
>
> The only fly in the ointment is that the signal based event model only
> knows about thread creation, but not about thread death.  So it won't
> catch thread death until some new thread is created.  I'm not sure
> what the implications of this are in practice.
>
> Comments?

Couldn't this be done dynamically?  That is, there are variables in
there that indicate whether we're using the signal-based model (wherein
I think we really do have to re-fetch the thread list all the time) or
the libthread_db-event-based model (wherein we don't).
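
Concretely, the compile-time #if could become a run-time test on the
using_thread_db_events flag that the patch already checks in
handle_thread_db_event.  A rough sketch of what the spot in
thread_db_check_child_state might look like (illustrative only, not a
tested change against the tree):

  /* Hypothetical replacement for the ALWAYS_UPDATE_THREAD_LIST
     compile-time switch: choose at run time based on the event model
     in use.  "process" is whatever child-process structure the
     surrounding code already passes around.  */
  if (using_thread_db_events)
    {
      /* Event-based model: TD_CREATE/TD_DEATH messages tell us exactly
         when the list changes, and handle_thread_db_event does the
         update, so nothing to do here.  */
    }
  else
    {
      /* Signal-based model: no reliable change notification, so keep
         refreshing the list on every wait status.  */
      update_thread_list (process);
    }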


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-14 22:04 [RFC] Improve performance of multi-threaded debugging Kevin Buettner
  2005-09-14 22:09 ` Daniel Jacobowitz
  2005-09-16 20:13 ` Jim Blandy
@ 2005-09-16 20:20 ` Jim Blandy
  2005-09-16 20:49   ` Kevin Buettner
  2005-11-04 21:16 ` Kevin Buettner
  3 siblings, 1 reply; 8+ messages in thread
From: Jim Blandy @ 2005-09-16 20:20 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: rda


Kevin Buettner <kevinb@redhat.com> writes:
> The only fly in the ointment is that the signal based event model only
> knows about thread creation, but not about thread death.  So it won't
> catch thread death until some new thread is created.  I'm not sure
> what the implications of this are in practice.

Even in the signal-based model, we're attached to all the threads, so
we should get a wait status via lwp_pool_waitpid.  Isn't that working?
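
The mechanism being described: once the debugger is attached to an LWP,
that LWP's exit comes back as an ordinary wait status.  A bare-bones
illustration using plain waitpid() rather than RDA's lwp_pool_waitpid
wrapper (on Linux kernels of this era, exits of clone()-created LWPs are
only reported to the tracer when __WALL or __WCLONE is passed):

#define _GNU_SOURCE 1
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>

/* Illustration only: reap one LWP's exit status.  A debugger attached
   to the LWP sees its death here even without any libthread_db event,
   which is the point being made above.  */

static void
reap_lwp_exit (pid_t lwp)
{
  int status;

  if (waitpid (lwp, &status, __WALL) == lwp
      && (WIFEXITED (status) || WIFSIGNALED (status)))
    printf ("LWP %ld exited; drop it from the thread list\n", (long) lwp);
}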


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-16 20:20 ` Jim Blandy
@ 2005-09-16 20:49   ` Kevin Buettner
  2005-09-17  0:19     ` Jim Blandy
  0 siblings, 1 reply; 8+ messages in thread
From: Kevin Buettner @ 2005-09-16 20:49 UTC (permalink / raw)
  To: rda

On Fri, 16 Sep 2005 13:19:57 -0700
Jim Blandy <jimb@redhat.com> wrote:

> 
> Kevin Buettner <kevinb@redhat.com> writes:
> > The only fly in the ointment is that the signal based event model only
> > knows about thread creation, but not about thread death.  So it won't
> > catch thread death until some new thread is created.  I'm not sure
> > what the implications of this are in practice.
> 
> Even in the signal-based model, we're attached to all the threads, so
> we should get a wait status via lwp_pool_waitpid.  Isn't that working?

It probably does, but I do not know for certain.

I saw that the thread list was getting fetched on every status check.  My
assumption was that there was some good reason for this to occur, so I
revised the code so that it happens only when the event model reports a
change.  What I did not investigate is whether the refetch ever really
needs to happen at all.

Kevin


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-16 20:49   ` Kevin Buettner
@ 2005-09-17  0:19     ` Jim Blandy
  0 siblings, 0 replies; 8+ messages in thread
From: Jim Blandy @ 2005-09-17  0:19 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: rda


Kevin Buettner <kevinb@redhat.com> writes:
> On Fri, 16 Sep 2005 13:19:57 -0700
> Jim Blandy <jimb@redhat.com> wrote:
>
>> 
>> Kevin Buettner <kevinb@redhat.com> writes:
>> > The only fly in the ointment is that the signal based event model only
>> > knows about thread creation, but not about thread death.  So it won't
>> > catch thread death until some new thread is created.  I'm not sure
>> > what the implications of this are in practice.
>> 
>> Even in the signal-based model, we're attached to all the threads, so
>> we should get a wait status via lwp_pool_waitpid.  Isn't that working?
>
> It probably does, but I do not know for certain.
>
> I saw that the thread list was getting fetched on every status check.  My
> assumption was that there was some good reason for this to occur, so I
> revised the code so that it happens only when the event model reports a
> change.  What I did not investigate is whether the refetch ever really
> needs to happen at all.

Well, that's a bigger question than the one I was thinking about.  You
wrote that you were concerned about missing thread death events, and I
was saying that I thought that wouldn't happen.

It would be nice to look at the larger question of whether we need to
re-transfer the list at all, or whether we could just let TD_CREATE
and TD_DEATH maintain it.  Something makes me a little uncomfortable
with an "edge-sensitive" model over a "level-sensitive" model, but I
can't see anything wrong with it.


* Re: [RFC] Improve performance of multi-threaded debugging
  2005-09-14 22:04 [RFC] Improve performance of multi-threaded debugging Kevin Buettner
                   ` (2 preceding siblings ...)
  2005-09-16 20:20 ` Jim Blandy
@ 2005-11-04 21:16 ` Kevin Buettner
  3 siblings, 0 replies; 8+ messages in thread
From: Kevin Buettner @ 2005-11-04 21:16 UTC (permalink / raw)
  To: rda

On Wed, 14 Sep 2005 15:04:39 -0700
Kevin Buettner <kevinb@redhat.com> wrote:

> 	* thread-db.c (ALWAYS_UPDATE_THREAD_LIST): Define to be 0.
> 	(handle_thread_db_event): Update thread list upon receipt of
> 	TD_CREATE or TD_DEATH events.
> 	(thread_db_check_child_state): Potentially disable, depending upon
> 	value of ALWAYS_UPDATE_THREAD_LIST, the thread list update.
> 	(thread_db_check_child_state): Update thread list for signal based
> 	event model too.

While preparing a patch for some recent work that I've done on RDA, I
noticed that I had not yet committed the above changes.  They're in now.

Kevin

