* Q: ugdb && watchpoints
From: Oleg Nesterov @ 2010-10-19 18:16 UTC
  To: archer

In short: I do not understand what a software watchpoint should
actually do to be "correct" (as much as possible without hardware
support) and useful.

Just in case, ugdb implements watchpoints via single-stepping,
like gdb does with can-use-hw-watchpoints=0. The only difference
is that it doesn't report the step to gdb until it notices that
the watched memory has changed. I do not see a more clever method.



Now consider a multithreaded tracee and a single watchpoint.
Say we have two threads, T1 and T2, and a "long VAR".

	(gdb) watch VAR
	(gdb) c -a

Each thread does a step and checks whether VAR has changed. However,
it is not possible to figure out which thread changed VAR. I thought
it would be better if both threads reported T05watch to gdb; in that
case the user could look at both and see what the code/insn does.
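
(For reference: a watchpoint hit goes back to gdb as a T05 stop reply
that carries the data address and the reporting thread, something like
the packet below. The address and thread ids are of course made up.)

	T05watch:0000000000601040;thread:p4b1.4b2;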

But this doesn't help. Even if both threads report T05watch
simultaneously, gdb picks a "random" thread to report and "ignores"
all the other watch reports (this is because it updates its copy of
VAR after the first notification, and it always ignores a T05watch
if it doesn't see that VAR has changed).
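
Roughly, the gdb side behaves like this (just a sketch of the logic
described above, not gdb's actual code; the names are invented):

	struct cached_wp { long cached_val; };
	/* hypothetical: re-read VAR from the stopped tracee */
	extern long read_VAR_from_tracee(struct cached_wp *b);

	/* called for every T05watch gdb receives for this watchpoint */
	static int watch_report_accepted(struct cached_wp *b)
	{
		long new_val = read_VAR_from_tracee(b);

		if (new_val == b->cached_val)
			return 0;		/* VAR looks unchanged, ignored */

		b->cached_val = new_val;	/* updated after the first report */
		return 1;			/* this is the thread gdb shows */
	}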

So, what should ugdb do? It looks like it doesn't make sense to report
more than one T05watch to gdb; ugdb can pick a random thread (say,
the first one that noticed the change) to report, with the same effect.
This would simplify the code, but it looks very ugly. OTOH, whatever
ugdb does, it can't improve things in this respect.



Now suppose we have a single thread and two watchpoints. The problem
is that a single instruction can change both, but there is no way to
report this: remote_parse_stop_reply() doesn't expect multiple
watchpoints, and there is only the single stop_reply->watch_data_address.

Just think about sys_read(): no matter how many watchpoints we have,
a single syscall insn can change them all.
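
So the best the stub can do is something like the sketch below
(reusing the made-up names from the first sketch): scan everything
after the step, but report only the first changed watchpoint, because
the reply has room for a single address.

	/* hypothetical helpers: send the stop reply / do one more step */
	extern void report_T05watch(struct tracee *t, unsigned long addr);
	extern void resume_single_step(struct tracee *t);

	static void report_after_step(struct tracee *t,
				      struct sw_watchpoint *wps, int nr)
	{
		int i;

		for (i = 0; i < nr; i++) {
			if (watched_mem_changed(t, &wps[i])) {
				/* any other changed wps are simply lost */
				report_T05watch(t, wps[i].addr);
				return;
			}
		}
		resume_single_step(t);	/* nothing changed, keep stepping */
	}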



Finally, a multithreaded tracee and multiple watchpoints. What should
ugdb do if some thread detects a memory change after the step? Which
watchpoint should it report in the stop reply? What should the other
threads do when they notice the change? Again, I do not see anything
better than /dev/urandom for choosing the thread/watchpoint pair.



I was even thinking about serializing, that is, ugdb scheduling only
one thread to step at a time. This way at least we would always know
which thread changes the memory. But this is non-trivial, very bad
from a performance POV, and doesn't work with syscalls.


Any advice is very much appreciated. Most probably there is no clever
solution: once a traced sub-thread detects that a watchpoint's memory
has changed, it should mark this wp as "reported" for the other threads
and report it to gdb. IOW, we report a random thread and a random wp.
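
Something along these lines, again with the helpers from the sketches
above, an extra "unsigned long flags" in the watchpoint, and the
kernel's test_and_set_bit() for the "reported" mark:

	#define WP_REPORTED	0	/* bit number in the new wp->flags */

	static void check_and_report(struct tracee *t, struct sw_watchpoint *wp)
	{
		/*
		 * test_and_set_bit() as in <linux/bitops.h>: the first
		 * thread that notices the change claims the watchpoint,
		 * everyone else sees the bit already set and stays silent.
		 * The bit would be cleared when gdb resumes the tracee.
		 */
		if (watched_mem_changed(t, wp) &&
		    !test_and_set_bit(WP_REPORTED, &wp->flags))
			report_T05watch(t, wp->addr);	/* random thread/wp */
		else
			resume_single_step(t);
	}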

Please confirm if this is what we want.

----------------------------------------------------------------------

Another question: I guess ugdb should implement hardware watchpoints
as well? Otherwise there is no improvement compared to gdbserver in
the likely case (at least I think that a-lot-of-wps is not that
common). But we only have Z2 for both. So I assume that ugdb should
try to use hardware watchpoints, but silently fall back to emulation?

(Btw, with or without gdbserver, hardware watchpoints do not work if
 the tracee changes the memory in a syscall. Perhaps gdb/gdbserver
 should use PTRACE_SYSCALL.)

The last (minor) problem: gdb never sends Z2 to ugdb if
default_region_ok_for_hw_watchpoint() thinks the size of the variable
is too large.

Oleg.


* Re: Q: ugdb && watchpoints
From: Kevin Buettner @ 2010-10-19 22:04 UTC
  To: archer

On Tue, 19 Oct 2010 20:11:59 +0200
Oleg Nesterov <oleg@redhat.com> wrote:

> I was even thinking about serializing, that is, ugdb scheduling only
> one thread to step at a time. This way at least we would always know
> which thread changes the memory. But this is non-trivial, very bad
> from a performance POV, and doesn't work with syscalls.

I think that stepping one thread at a time is the approach that must
be taken if you want to accurately report the thread that triggered
the watchpoint.  (I don't understand the issue with syscalls though...)

> Any advice is very much appreciated. Most probably there is no clever
> solution: once a traced sub-thread detects that a watchpoint's memory
> has changed, it should mark this wp as "reported" for the other
> threads and report it to gdb. IOW, we report a random thread and a
> random wp.

Is there a big performance win in implementing software watchpoints in
ugdb?  If not, I wouldn't worry about it.  My experience with software
watchpoints in native gdb is that they're *very* slow and, as such,
are often not worth using at all.

> Another question: I guess ugdb should implement hardware watchpoints
> as well? Otherwise there is no improvement compared to gdbserver in
> the likely case (at least I think that a-lot-of-wps is not that
> common). But we only have Z2 for both. So I assume that ugdb should
> try to use hardware watchpoints, but silently fall back to emulation?

IMO, hardware watchpoints are definitely worth implementing.  From
a user perspective, I would prefer that the stub not implement software
watchpoint support when the real hardware watchpoints are used up.

Hope this helps...

Kevin
