public inbox for archer@sourceware.org
* froggy/archer -- 2009-02-24
@ 2009-02-24 16:01 Chris Moller
  2009-02-24 16:09 ` Daniel Jacobowitz
  0 siblings, 1 reply; 12+ messages in thread
From: Chris Moller @ 2009-02-24 16:01 UTC (permalink / raw)
  To: Project Archer

Well, there's good news and there's bad news.

The good news is that the re-write I've been doing to more efficiently
incorporate froggy into gdb is done.

The bad news is that I've hit a brick wall.

Here's the deal:

utrace provides two classes of capabilities.  One is a subset of the
standard ptrace() stuff: attaching/detaching processes, stopping and
continuing them, and examining/setting registers.  The other is a
waitpid()-like capability of reacting to various events in attached
processes--life-cycle changes like clone, exit and death; the arrival of
signals; and entry into and exit from syscalls.  All of that is
asynchronous and froggy deals with the asynchronicity by being
explicitly dual-threaded, one thread for the ptrace()-like capabilities,
the other spinning on a blocking waitpid()-like thing and reporting
stuff to the application via callbacks.
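
As a purely illustrative sketch of that shape--nothing below is real
froggy API; a pipe stands in for froggy's event channel--the
dual-thread arrangement looks roughly like this:

  /* One thread issues control requests; the other blocks waiting for
     events and reports them to the application through a callback.  */
  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static int event_fd[2];               /* stand-in event channel */

  static void
  report_event (int ev)                 /* application callback */
  {
    printf ("event %d arrived\n", ev);
  }

  /* Spins on a blocking read, dispatching each event via callback.  */
  static void *
  event_thread (void *arg)
  {
    int ev;

    while (read (event_fd[0], &ev, sizeof ev) == (ssize_t) sizeof ev)
      report_event (ev);
    return arg;
  }

  int
  main (void)
  {
    pthread_t tid;
    int ev = 42;

    if (pipe (event_fd) != 0)
      return 1;
    pthread_create (&tid, NULL, event_thread, NULL);

    /* The "control" side would issue its ptrace()-like requests here;
       we just inject one fake event and shut down.  */
    if (write (event_fd[1], &ev, sizeof ev) != sizeof ev)
      return 1;
    close (event_fd[1]);
    pthread_join (tid, NULL);
    return 0;
  }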

The problem is that there's no way to fit that dual-thread/callback
paradigm into existing gdb--I've been tearing my hair out for a couple
of weeks trying to make it happen.  The result of all this is that while
I can replace ptrace() calls with froggy calls, I'm stuck with the
existing waitpid() stuff in gdb, and that's pretty much useless.  Under
the covers, ptrace() is already utrace-based--part of Roland's utrace
kernel patch does that.  So, without being able to use froggy's
event-catching capability, all I'm left with is the same capabilities as
utrace-under-ptrace, but a lot less efficiently--the ptrace() syscall
vs. froggy's /sys/kernel/debug pseudo-file i/o.

(All this, by the way, is exactly the problem I ran into in frysk and
neither Andrew Cagney nor I could make it work.  I was kinda hoping that,
since gdb is better structured than frysk was, the problem would be less
intractable.  It wasn't.)

I'm going to back off and think about this some more--maybe something
will occur to me--but meanwhile I'll return to what I was doing a couple
of months ago, stuffing IBM's user-breakpoint code, ubp, into froggy.

-- 
Chris Moller

  I know that you believe you understand what you think I said, but
  I'm not sure you realize that what you heard is not what I meant.
      -- Robert McCloskey




* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:01 froggy/archer -- 2009-02-24 Chris Moller
@ 2009-02-24 16:09 ` Daniel Jacobowitz
  2009-02-24 16:24   ` Tom Tromey
  2009-02-24 16:32   ` Chris Moller
  0 siblings, 2 replies; 12+ messages in thread
From: Daniel Jacobowitz @ 2009-02-24 16:09 UTC (permalink / raw)
  To: Chris Moller; +Cc: Project Archer

On Tue, Feb 24, 2009 at 11:01:08AM -0500, Chris Moller wrote:
> The problem is that there's no way to fit that dual-thread/callback
> paradigm into existing gdb--I've been tearing my hair out for a couple
> of weeks trying to make it happen.

Could you expand on this a little?  It does not sound all that
different from the signal-based mechanism that GDB uses for async and
non-stop debug (I'm assuming you're looking at HEAD).

-- 
Daniel Jacobowitz
CodeSourcery


* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:09 ` Daniel Jacobowitz
@ 2009-02-24 16:24   ` Tom Tromey
  2009-02-24 22:14     ` Chris Moller
  2009-02-25 11:05     ` Chris Moller
  2009-02-24 16:32   ` Chris Moller
  1 sibling, 2 replies; 12+ messages in thread
From: Tom Tromey @ 2009-02-24 16:24 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Chris Moller, Project Archer

>>>>> "Daniel" == Daniel Jacobowitz <drow@false.org> writes:

Daniel> On Tue, Feb 24, 2009 at 11:01:08AM -0500, Chris Moller wrote:
>> The problem is that there's no way to fit that dual-thread/callback
>> paradigm into existing gdb--I've been tearing my hair out for a couple
>> of weeks trying to make it happen.

Daniel> Could you expand on this a little?  It does not sound all that
Daniel> different from the signal-based mechanism that GDB uses for async and
Daniel> non-stop debug (I'm assuming you're looking at HEAD).

Additionally, Chris, could you push what you have?
It doesn't have to be pretty, or work, or even compile.

This would be helpful because interested people could make a diff and
take a look -- it would help make the discussion a bit more specific.

thanks,
Tom


* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:09 ` Daniel Jacobowitz
  2009-02-24 16:24   ` Tom Tromey
@ 2009-02-24 16:32   ` Chris Moller
  2009-02-24 16:46     ` Daniel Jacobowitz
  1 sibling, 1 reply; 12+ messages in thread
From: Chris Moller @ 2009-02-24 16:32 UTC (permalink / raw)
  To: archer


Daniel Jacobowitz wrote:
> On Tue, Feb 24, 2009 at 11:01:08AM -0500, Chris Moller wrote:
>   
>> The problem is that there's no way to fit that dual-thread/callback
>> paradigm into existing gdb--I've been tearing my hair out for a couple
>> of weeks trying to make it happen.
>>     
>
> Could you expand on this a little?  It does not sound all that
> different from the signal-based mechanism that GDB uses for async and
> non-stop debug (I'm assuming you're looking at HEAD).
>   

The problem is that there are waitpid() instances in a lot of places,
all of them blocking, and then dealing with whatever happens to break
the block.  The architecture of froggy is such that it expects to be the
sole handler of the inferior-process events that waitpid() would block on,
and to handle them on a dedicated thread.  So far as I can tell, the only
way to make this work in existing gdb is to implement some other kind of
blocking in the gdb waitpid()s and have froggy callbacks tickle them as
necessary.  This looks like it would not only be hard to do, but
wouldn't really add any capability to what's there already.
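
To make the "tickle them" idea concrete, here's a rough sketch of what
I mean--hypothetical code, not anything in gdb or froggy: a
waitpid()-shaped wrapper that blocks on a condition variable which an
asynchronous event callback signals.

  #include <pthread.h>

  static pthread_mutex_t ev_lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t ev_cond = PTHREAD_COND_INITIALIZER;
  static int pending_status;
  static int have_event;

  /* Called from the event-reporting thread when something happens.  */
  void
  event_callback (int status)
  {
    pthread_mutex_lock (&ev_lock);
    pending_status = status;
    have_event = 1;
    pthread_cond_signal (&ev_cond);
    pthread_mutex_unlock (&ev_lock);
  }

  /* Stand-in for one of the blocking waitpid() call sites.  */
  int
  wait_for_event (void)
  {
    int status;

    pthread_mutex_lock (&ev_lock);
    while (!have_event)
      pthread_cond_wait (&ev_cond, &ev_lock);
    have_event = 0;
    status = pending_status;
    pthread_mutex_unlock (&ev_lock);
    return status;
  }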

-- 
Chris Moller

  I know that you believe you understand what you think I said, but
  I'm not sure you realize that what you heard is not what I meant.
      -- Robert McCloskey




* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:32   ` Chris Moller
@ 2009-02-24 16:46     ` Daniel Jacobowitz
  2009-02-25  4:20       ` Chris Moller
  0 siblings, 1 reply; 12+ messages in thread
From: Daniel Jacobowitz @ 2009-02-24 16:46 UTC (permalink / raw)
  To: archer

On Tue, Feb 24, 2009 at 11:32:37AM -0500, Chris Moller wrote:
> The problem is that there are waitpid() instances in a lot of places,
> all of them blocking, and then dealing with whatever happens to break
> the block.  The architecture of froggy is such that it expects to be the
> sole handler of the inferior-process events that waitpid() would block on,
> and to handle them on a dedicated thread.  So far as I can tell, the only
> way to make this work in existing gdb is to implement some other kind of
> blocking in the gdb waitpid()s and have froggy callbacks tickle them as
> necessary.  This looks like it would not only be hard to do, but
> wouldn't really add any capability to what's there already.

I assume we're talking primarily about linux-nat.c.

The places which call waitpid (or my_waitpid) are:

  * get_pending_events, which is using it to collect all events that
    have happened asynchronously - using WNOHANG.

  * linux_test_for_tracefork, which is just used at startup to
    investigate capabilities of the host kernel.

  * linux_child_follow_fork.  This one does have to block, it's waiting
    for the parent to stop as vfork returns.

  * linux_nat_post_attach_wait, which is just trying to quiesce after
    attach.

  * linux_handle_extended_wait.  This is another two-processes case;
    we are waiting for the child to quiesce because we can not handle
    the fork event reported by the parent until this happens.

  * kill_wait_callback.  Another ptrace wart; we're just waiting for
    killed processes to go away.  If we got async notification of
    that, we could easily sleep here; the order doesn't matter.

  * wait_lwp is also only used for quiescing, after stopping a thread.

And of course linux_nat_wait.  This is the only really interesting
one; notice that in async mode, it never calls waitpid, just checks
the asynchronous queue.

Of course, I don't know what you're trying to achieve with froggy
here.  But it sounds like it's doing basically the same thing as
the queued_waitpid / linux_nat_event_pipe_* mechanism; that is a
layer which transforms waitpid results into an async stream.
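
For reference, the general shape of that layer--this is just a generic
sketch of the pattern, not the actual linux-nat.c code--is roughly:

  #include <sys/wait.h>
  #include <unistd.h>

  static int event_pipe[2];     /* created with pipe () at startup */

  struct child_event { pid_t pid; int status; };

  /* Typically run when SIGCHLD fires: drain everything pending with
     WNOHANG and push each result down the pipe.  write () is
     async-signal-safe, so this can live in a signal handler.  */
  void
  push_pending_events (void)
  {
    struct child_event ev;

    while ((ev.pid = waitpid (-1, &ev.status, WNOHANG)) > 0)
      {
        if (write (event_pipe[1], &ev, sizeof ev) != sizeof ev)
          break;
      }
  }

  /* The event loop pulls results back out whenever event_pipe[0]
     polls readable, like any other asynchronous input.  */
  int
  pop_event (struct child_event *ev)
  {
    return read (event_pipe[0], ev, sizeof *ev) == (ssize_t) sizeof *ev;
  }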

-- 
Daniel Jacobowitz
CodeSourcery


* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:24   ` Tom Tromey
@ 2009-02-24 22:14     ` Chris Moller
  2009-02-25  0:59       ` Tom Tromey
  2009-02-25 11:05     ` Chris Moller
  1 sibling, 1 reply; 12+ messages in thread
From: Chris Moller @ 2009-02-24 22:14 UTC (permalink / raw)
  To: archer


Tom Tromey wrote:
>
> Additionally, Chris, could you push what you have?

It looks like once you've cloned with git://sources..., you can't just
override that with a push to ssh://sources...  I'm pulling down a fresh
copy of things with ssh; I'll stuff in my deltas and push stuff tomorrow.

> It doesn't have to be pretty, 

That's good...

> or work, 

It kinda does, for the ptrace()/control bits.

> or even compile.
>   

Only if you have froggy installed--you need the includes and
libfroggy.so even to build, plus the module froggy.ko if you want to try
to run.  If anyone wants to try building it, configure with

./configure --with-froggy=<wherever-you-put-froggy>

mine, e.g., is

./configure --with-froggy=/usr/sandbox/froggy/froggy

where /usr/sandbox/froggy/froggy is the dir with the docs, froggy, lib,
module, and test dirs in it.

-- 
Chris Moller

  I know that you believe you understand what you think I said, but
  I'm not sure you realize that what you heard is not what I meant.
      -- Robert McCloskey




* Re: froggy/archer -- 2009-02-24
  2009-02-24 22:14     ` Chris Moller
@ 2009-02-25  0:59       ` Tom Tromey
  0 siblings, 0 replies; 12+ messages in thread
From: Tom Tromey @ 2009-02-25  0:59 UTC (permalink / raw)
  To: Chris Moller; +Cc: archer

Chris> It looks like once you've cloned with git://sources..., you can't just
Chris> override that with a push to ssh://sources...

For future reference, you can edit .git/config and change the access
method.
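
For example, the relevant bit of .git/config looks roughly like this
(URLs elided here the same way they are above):

  [remote "origin"]
          fetch = +refs/heads/*:refs/remotes/origin/*
          url = git://sources...

and switching the access method is just a matter of changing that url
line to the corresponding ssh://sources... URL (or, equivalently,
running "git config remote.origin.url <new-url>").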

Tom


* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:46     ` Daniel Jacobowitz
@ 2009-02-25  4:20       ` Chris Moller
  2009-02-25 14:32         ` Jan Kratochvil
  2009-02-25 15:11         ` Daniel Jacobowitz
  0 siblings, 2 replies; 12+ messages in thread
From: Chris Moller @ 2009-02-25  4:20 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Project Archer


Daniel Jacobowitz wrote:
>
> I assume we're talking primarily about linux-nat.c.
>
> The places which call waitpid (or my_waitpid) are:
>   

First, a bit more of how froggy works.   Froggy has two components, a
user-space library and a kernel module. 

A big part of utrace is what are called report_* callbacks--hooks that,
if they're enabled, get called when various things occur--and a lot of
the froggy module deals with those hooks.

The library communicates with the module via ioctl()s and blocking
read()s on /sys/kernel/debug/froggy.  The ioctl()s are the mechanism
used for ptrace()-like operations; the read()s are how events are
reported.  When froggy is initialised, it spawns a thread that loops on
a read().  In the module, that read is blocked on a wait.  When an event
of interest occurs, the module queues a packet and wakes up the thread,
letting the read() return.  In the froggy lib, the returned packet is
parsed and any appropriate user-space callbacks, more or less
corresponding to the kernel/utrace report_* callbacks, are called.
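
In sketch form--the packet layout, callback table, and all of the names
below are made up for illustration; only the open()/ioctl()/read()
shape comes from the description above--the library side looks
something like this:

  #include <fcntl.h>
  #include <pthread.h>
  #include <unistd.h>

  struct frog_packet { int pid; int event; long aux; };   /* hypothetical */

  typedef void (*frog_cb) (const struct frog_packet *);
  static frog_cb callbacks[16];                            /* hypothetical */
  static int frog_fd;

  /* The event thread: read () blocks in the module until a packet is
     queued and the waiter is woken, then the packet is dispatched to
     whatever user-space callback is registered for that event.  */
  static void *
  frog_event_loop (void *arg)
  {
    struct frog_packet pkt;

    while (read (frog_fd, &pkt, sizeof pkt) == (ssize_t) sizeof pkt)
      if (pkt.event >= 0 && pkt.event < 16 && callbacks[pkt.event])
        callbacks[pkt.event] (&pkt);
    return arg;
  }

  int
  frog_init (void)
  {
    pthread_t tid;

    frog_fd = open ("/sys/kernel/debug/froggy", O_RDWR);
    if (frog_fd < 0)
      return -1;
    /* ptrace()-like control operations go through ioctl (frog_fd, ...);
       events come back through the thread spawned here.  */
    return pthread_create (&tid, NULL, frog_event_loop, NULL);
  }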

The key thing here is that in froggy there's exactly one waitpid()-like
thing--that blocking read().  In gdb, there are multiple waitpid()s and
different things happen after each of the different waitpid()s--in
effect, each waitpid() occurs in its own context.  Due to the
centralised nature of the froggy event reporting, that context is lost,
so there is likely no single appropriate user-space callback that will
work.  Further, as noted below, the whole point of some waitpid()s is to
block for various reasons.

Regarding the specific instances:

>   * get_pending_events, which is using it to collect all events that
>     have happened asynchronously - using WNOHANG.
>   

This one actually maps to froggy fairly well, just forwarding events to
linux_nat_event_pipe_push().  The problem--and I don't know for sure if
it really is one--is that since in froggy the events are reported
through the froggy response thread, linux_nat_event_pipe_push() will be
called asynchronously, and I have no idea whether that does what's
needed.


>   * linux_test_for_tracefork, which is just used at startup to
>     investigate capabilities of the host kernel.
>   

This is irrelevant--by definition, if froggy is being used, the
capabilities of the kernel are known.

>   * linux_child_follow_fork.  This one does have to block, it's waiting
>     for the parent to stop as vfork returns.
>   

The whole purpose of this use is to block the thread--see above.

>   * linux_nat_post_attach_wait, which is just trying to quiesce after
>     attach.
>   

Attaching to processes works very differently in froggy/utrace, so I'm not
sure this is relevant.

>   * linux_handle_extended_wait.  This is another two-processes case;
>     we are waiting for the child to quiesce because we can not handle
>     the fork event reported by the parent until this happens.
>   

Again, this kind of thing looks like it's handled internally in froggy.

>   * kill_wait_callback.  Another ptrace wart; we're just waiting for
>     killed processes to go away.  If we got async notification of
>     that, we could easily sleep here; the order doesn't matter.
>   

Again, killing in froggy will probably have an option flag to block
until the killed process is really, truly, dead, or partially dead, or
whatever the user wants.

>   * wait_lwp is also only used for quiescing, after stopping a thread.
>   

froggy_quiesce_pid() optionally blocks until the thread quiesces.

> And of course linux_nat_wait.  This is the only really interesting
> one; notice that in async mode, it never calls waitpid, just checks
> the asynchronous queue.
>   

This looks like another one of those context-dependent things.

> Of course, I don't know what you're trying to achieve with froggy
> here.  But it sounds like it's doing basically the same thing as
> the queued_waitpid / linux_nat_event_pipe_* mechanism; that is a
> layer which transforms waitpid results into an async stream.
>   

That's a big part of it, but the block-for-whatever-reason stuff has to
work too and that's mostly the part I can't figure out how to do--that
and the context thing.


-- 
Chris Moller

  I know that you believe you understand what you think I said, but
  I'm not sure you realize that what you heard is not what I meant.
      -- Robert McCloskey




* Re: froggy/archer -- 2009-02-24
  2009-02-24 16:24   ` Tom Tromey
  2009-02-24 22:14     ` Chris Moller
@ 2009-02-25 11:05     ` Chris Moller
  2009-02-25 16:30       ` Chris Moller
  1 sibling, 1 reply; 12+ messages in thread
From: Chris Moller @ 2009-02-25 11:05 UTC (permalink / raw)
  To: archer


Tom Tromey wrote:
>
> Additionally, Chris, could you push what you have?

Three of the four files I used as the basis for my hacking have changed
since I started my hacking: inf-ptrace-froggy.c, i386-linux-nat.c, and
linux-nat.c.  I'll merge my stuff in and try to push later today--right
now I'm taking my kids off to their crack-of-dawn skating practice. 
(Someday I'll figure out why figure-skating coaches all seem to like to
do their thing before the damn birds get up...)


-- 
Chris Moller

  I know that you believe you understand what you think I said, but
  I'm not sure you realize that what you heard is not what I meant.
      -- Robert McCloskey




* Re: froggy/archer -- 2009-02-24
  2009-02-25  4:20       ` Chris Moller
@ 2009-02-25 14:32         ` Jan Kratochvil
  2009-02-25 15:11         ` Daniel Jacobowitz
  1 sibling, 0 replies; 12+ messages in thread
From: Jan Kratochvil @ 2009-02-25 14:32 UTC (permalink / raw)
  To: Chris Moller; +Cc: Daniel Jacobowitz, Project Archer

Hi,

sorry for talking so abstractly without the GDB froggy patch; you may
already be aware of some of these parts:


On Wed, 25 Feb 2009 05:20:21 +0100, Chris Moller wrote:
> First, a bit more of how froggy works.   Froggy has two components, a
> user-space library and a kernel module. 

There are two programming models in use:
(a) Event driven - everything is non-blocking; the only blocking point is a
    central poll().  The program is single-threaded.  Used by GTK/glib
    g_main_loop().
(b) Thread driven - everything is blocking; each expected event gets its own
    thread, which waits until the event happens.  Used, for example, by the
    Java model.

Kernel froggy is (a).  Userland froggy, AFAIK, turns (a) into the (b) model.
GDB non-stop mode (*) is (a).  Converting (a) to (b) in userland froggy and
then converting (b) back to GDB's (a) is too complicated.  (A toy sketch of
the two models follows the note below.)

(*) Type first: (gdb) set non-stop on, (gdb) set target-async on
    Available in FSF GDB HEAD and in Fedora Rawhide.
    _Not_ available in FSF GDB release gdb-6.8 or in Fedora 10.
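
Here is the toy sketch of the two models mentioned above (not GDB or
froggy code, just the two shapes):

  #include <poll.h>
  #include <pthread.h>

  /* (a) Event driven: a single thread, one central poll(), per-fd
     handlers that must never block.  */
  void
  event_driven_loop (struct pollfd *fds, int nfds, void (*handler) (int fd))
  {
    for (;;)
      {
        poll (fds, nfds, -1);            /* the only blocking point */
        for (int i = 0; i < nfds; i++)
          if (fds[i].revents & POLLIN)
            handler (fds[i].fd);         /* non-blocking work only */
      }
  }

  /* (b) Thread driven: each expected event gets its own thread, which
     simply blocks until the event happens (a real program would keep
     track of the thread and join it later).  */
  void
  thread_driven_wait (void *(*wait_for_event) (void *), void *what)
  {
    pthread_t tid;

    pthread_create (&tid, NULL, wait_for_event, what);
  }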


> The library communicates with the module via ioctl()s and blocking
> read()s on /sys/kernel/debug/froggy.

You should look at gdb_wait_for_event(), which is the poll() loop of GDB,
and insert the "/sys/kernel/debug/froggy" fd into the set of events
gdb_wait_for_event() waits for - the `gdb_notifier' fd set.
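
A rough sketch of that hookup, using GDB's event-loop API
(add_file_handler/gdb_client_data from event-loop.h -- the exact
signatures here are from memory, and the handler body is of course
hypothetical):

  #include "defs.h"
  #include "event-loop.h"

  static int froggy_fd;  /* fd for /sys/kernel/debug/froggy */

  /* Called from gdb_wait_for_event () when froggy_fd polls readable.
     It must not block: just pull the pending packet(s) off the fd and
     turn them into target events for linux_nat_wait to consume.  */
  static void
  froggy_event_handler (int error, gdb_client_data client_data)
  {
    if (error)
      return;
    /* read () the queued froggy packet(s) here, WNOHANG-style.  */
  }

  void
  froggy_register_with_event_loop (int fd)
  {
    froggy_fd = fd;
    add_file_handler (froggy_fd, froggy_event_handler, NULL);
  }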


> A big part of utrace is what are called report_* callbacks--hooks that,
> if they're enabled, get called when various things occur

I find it easier to patch GDB to use its existing event infrastructure
than to try to replace it with a new external one from the froggy library.


> The key thing here is that in froggy there's exactly one waitpid()-like
> thing--that blocking read().  In gdb, there are multiple waitpid()s and

There is only one main poll(), in gdb_wait_for_event().  Unfortunately
there are also several other scattered waitpid()s, but these are just a
relic of attach and similar operations and should mostly no longer be
needed with utrace.


> >   * get_pending_events, which is using it to collect all events that
> >     have happened asynchronously - using WNOHANG.
> 
> This one actually maps to froggy fairly well, just forwarding events to
> linux_nat_event_pipe_push().   The problem--and I don't know for sure if
> it really is one--is that since in froggy the events are reported
> through the froggy response thread, linux_nat_event_pipe_push() will be
> called asynchronously, and I have no idea whether that does what's
> needed.

IMO get_pending_events should not be needed.  It only does the
conversion from waitpid() results into socket data input (by
linux_nat_event_pipe_push).  As in the kernel froggy case you have the
luxury of the data already coming in by socket (and not by waitpid(2)),
get_pending_events is obsolete for utrace/froggy.

There should be no new froggy thread, as GDB is already prepared to always
end up in poll() (== gdb_wait_for_event()) itself.


> >   * linux_child_follow_fork.  This one does have to block, it's waiting
> >     for the parent to stop as vfork returns.

Neither `(gdb) set follow-fork-mode' nor `(gdb) set detach-on-fork' support is
on the critical path for the basic GDB port to utrace/froggy.


> >   * linux_nat_post_attach_wait, which is just trying to quiesce after
> >     attach.
> >   
> 
> Attaching processes works very differently in froggy/utrace, so I'm not
> sure this is relevant.

Yes, this function is thankfully gone for froggy/utrace.


> >   * linux_handle_extended_wait.  This is another two-processes case;
> >     we are waiting for the child to quiesce because we can not handle
> >     the fork event reported by the parent until this happens.

None of the multithreading/forking/execing support is on the critical path
for the basic GDB port to utrace/froggy.


> >   * kill_wait_callback.  Another ptrace wart; we're just waiting for
> >     killed processes to go away.  If we got async notification of
> >     that, we could easily sleep here; the order doesn't matter.
> >   
> 
> Again, killing in froggy will probably have an option flag to block
> until the killed process is really, truly, dead, or partially dead, or
> whatever the user wants.

This GDB code (linux_nat_kill) still does not implement the GDB async
mode, which could probably be done for the utrace/froggy port.


Regards,
Jan


* Re: froggy/archer -- 2009-02-24
  2009-02-25  4:20       ` Chris Moller
  2009-02-25 14:32         ` Jan Kratochvil
@ 2009-02-25 15:11         ` Daniel Jacobowitz
  1 sibling, 0 replies; 12+ messages in thread
From: Daniel Jacobowitz @ 2009-02-25 15:11 UTC (permalink / raw)
  To: Chris Moller; +Cc: Project Archer

On Tue, Feb 24, 2009 at 11:20:21PM -0500, Chris Moller wrote:
> The library communicates with the module via ioctl()s and blocking
> read()s on /sys/kernel/debug/froggy.  The ioctl()s are the mechanism
> used for ptrace()-like operations; the read()s are how events are
> reported.  When froggy is initialised, it spawns a thread that loops on
> a read().  In the module, that read is blocked on a wait.  When an event
> of interest occurs, it queues a packet, wakes up the thread, letting the
> read() return.  In the froggy lib, the returned packet is parsed and any
> appropriate user-space callbacks, more or less corresponding to the
> kernel/utrace report_* callbacks, are called.

Ugh.  I can tell you you'll have a hard time getting traction with GDB
upstream for this - we've gone to a lot of trouble to avoid a pthreads
dependency.  Am I missing something special about froggy, or could
this just as easily be handled with select and non-blocking reads?
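
For comparison, the select-based alternative would look roughly like
this--no froggy API here, just a file descriptor assumed to carry
event packets:

  #include <errno.h>
  #include <fcntl.h>
  #include <sys/select.h>
  #include <unistd.h>

  /* Make the event fd non-blocking so reads never stall the caller.  */
  int
  make_nonblocking (int fd)
  {
    int flags = fcntl (fd, F_GETFL, 0);

    return flags < 0 ? -1 : fcntl (fd, F_SETFL, flags | O_NONBLOCK);
  }

  /* Wait, in the single GDB thread, until the event fd is readable,
     then drain whatever packets are there without ever blocking.  */
  int
  drain_events (int fd, void (*handle) (const char *buf, ssize_t len))
  {
    fd_set readfds;
    char buf[256];
    ssize_t n;

    FD_ZERO (&readfds);
    FD_SET (fd, &readfds);
    if (select (fd + 1, &readfds, NULL, NULL, NULL) < 0)
      return -1;

    while ((n = read (fd, buf, sizeof buf)) > 0)
      handle (buf, n);
    return (n < 0 && errno != EAGAIN) ? -1 : 0;
  }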

> The key thing here is that in froggy there's exactly one waitpid()-like
> thing--that blocking read().  In gdb, there are multiple waitpid()s and
> different things happen after each of the different waitpid()s--in
> effect, each waitpid() occurs in its own context.  Due to the
> centralised nature of the froggy event reporting, that context is lost,
> so there is likely no single appropriate user-space callback that will
> work.  Further, as noted below, the whole point of some waitpid()s is to
> block for various reasons.

Sounds like there are two options for each specific instance.  Make it
asynchronous, or else wait for the specific event you need to arrive
while letting other events build up in the queue - a mini event loop.
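
A rough sketch of such a mini event loop--the event type and the fetch
function are placeholders here, not GDB code:

  #include <stddef.h>

  struct event { int pid; int kind; struct event *next; };

  static struct event *deferred_head, *deferred_tail;

  /* Stand-in for "get the next event from the async stream, blocking
     until one arrives".  */
  extern struct event *fetch_next_event (void);

  /* Block for the one event this call site needs, parking everything
     else so the main event loop can consume it later.  */
  struct event *
  wait_for_specific_event (int pid, int kind)
  {
    struct event *ev;

    for (;;)
      {
        ev = fetch_next_event ();
        if (ev->pid == pid && ev->kind == kind)
          return ev;                     /* the one we were waiting for */

        /* Not ours: append it to the deferred queue.  */
        ev->next = NULL;
        if (deferred_tail)
          deferred_tail->next = ev;
        else
          deferred_head = ev;
        deferred_tail = ev;
      }
  }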

We've talked about fully asynchronous event loop behavior in GDB
before.  In many cases, we simply can't - for instance, expression
evaluation.  It'd take a month to rewrite the expression parser to
behave asynchronously for "print foo() + bar()".

> Regarding the specific instances:
> 
> >   * get_pending_events, which is using it to collect all events that
> >     have happened asynchronously - using WNOHANG.
> >   
> 
> This one actually maps to froggy fairly well, just forwarding events to
> linux_nat_event_pipe_push().   The problem--and I don't know for sure if
> it really is one--is that since in froggy the events are reported
> through the froggy response thread, linux_nat_event_pipe_push() will be
> called asynchronously, and I have no idea whether that does what's
> needed.

It's currently called from a signal handler, so nominally asynchronous.


-- 
Daniel Jacobowitz
CodeSourcery


* Re: froggy/archer -- 2009-02-24
  2009-02-25 11:05     ` Chris Moller
@ 2009-02-25 16:30       ` Chris Moller
  0 siblings, 0 replies; 12+ messages in thread
From: Chris Moller @ 2009-02-25 16:30 UTC (permalink / raw)
  To: archer


Chris Moller wrote:
> Tom Tromey wrote:
>   
>> Additionally, Chris, could you push what you have?
>>     
>
> Three of the four files I used as the basis for my hacking have changed
> since I started my hacking: inf-ptrace-froggy.c, i386-linux-nat.c, and
> linux-nat.c.  I'll merge my stuff in and try to push later today--right
> now I'm taking my kids off to their crack-of-dawn skating practice. 
> (Someday I'll figure out why figure-skating coaches all seem to like to
> do their thing before the damn birds get up...)
>   

Okay, archer-moller-froggy-v2 pushed.  I just cloned a fresh copy and
checked out the branch--it seems to be building okay.


-- 
Chris Moller

  I know that you believe you understand what you think I said, but
  I'm not sure you realize that what you heard is not what I meant.
      -- Robert McCloskey




Thread overview: 12+ messages
2009-02-24 16:01 froggy/archer -- 2009-02-24 Chris Moller
2009-02-24 16:09 ` Daniel Jacobowitz
2009-02-24 16:24   ` Tom Tromey
2009-02-24 22:14     ` Chris Moller
2009-02-25  0:59       ` Tom Tromey
2009-02-25 11:05     ` Chris Moller
2009-02-25 16:30       ` Chris Moller
2009-02-24 16:32   ` Chris Moller
2009-02-24 16:46     ` Daniel Jacobowitz
2009-02-25  4:20       ` Chris Moller
2009-02-25 14:32         ` Jan Kratochvil
2009-02-25 15:11         ` Daniel Jacobowitz
