public inbox for gdb-patches@sourceware.org
From: Pedro Alves <palves@redhat.com>
To: Doug Evans <dje@google.com>
Cc: "Abid, Hafiz" <Hafiz_Abid@mentor.com>,
	       "gdb-patches@sourceware.org" <gdb-patches@sourceware.org>,
	       "Mirza, Taimoor" <Taimoor_Mirza@mentor.com>
Subject: Re: [patch] Disassembly improvements
Date: Wed, 16 Oct 2013 12:02:00 -0000	[thread overview]
Message-ID: <525E8033.7060204@redhat.com> (raw)
In-Reply-To: <21085.59640.697075.435874@ruffy.mtv.corp.google.com>

On 10/16/2013 02:16 AM, Doug Evans wrote:
> Pedro Alves writes:
>  > On 10/11/2013 10:34 PM, Doug Evans wrote:
>  > 
>  > > This is a specific fix to a general problem.
>  > 
>  > I don't know that this is a general problem.
> 
> The general problem I'm referring to is efficient access of target memory.
> [Otherwise we wouldn't have things like the dcache,
> trust-readonly, explicit caching support for stack requests, etc.]
> 
>  >  It may look like one,
>  > but it's not super clear to me.  Yes, we might have a similar problem
>  > caused by lots of tiny reads from the target during prologue analysis.
>  > But the approach there might be different from the right approach for
>  > disassembly, or we could also come to the conclusion the problem
>  > there is not exactly the same.
>  >
>  > > Question: How much more of the general problem can we fix without
>  > > having a fix baked into the disassembler?
>  > 
>  > The disassembly use case is one where GDB is being
>  > told by the user "treat this range of addresses that I'll be
>  > reading sequentially, as code".  If that happens to trip on some
>  > memory mapped registers or some such, then it's garbage-in,
>  > garbage-out, it was the user's fault.
> 
> Though if gdb doesn't provide a range to constrain the caching,
> the caching doesn't come into play in the current version of the patch
> (the patch still avoids trying to prefetch too much).
> In the case of, e.g., "disas main" gdb does provide a range.

Yes, that's what I was talking about.

> The patch makes "disas main" efficient but doesn't help "x/5i main".
> [No claim is made that improving the latter case is necessarily as easy,
> but I think there is a case to be made that this patch fixes
> a specific case (disas) of a specific case (reading code memory
> for disassembly) of a general problem (reading target memory) :-).]

Thanks, that's clearer.

x/5i isn't a pressing use case, like "disassembly", IME.
Where disassembly slowness gets noticeable is with frontends (like
Eclipse) that display a memory disassembly window that gets
updated/refreshed quite frequently, basically after every
single-step or user command.  When x/5i is typed by the user
interactively, 5 reads vs. 1 read won't really be noticeable.

TBC, I'm not advocating against a more general fix, if somebody's
going to work on it.  I'd love that.

> Presumably gdb can use function bounds or something else from the
> debug info to constrain the affected memory space for other requests
> so those can be sped up too.

Yeah.

> "b main" on amd64 is instructive.
> The stack align machinery blindly fetches 18 bytes,
> and then prologue skipping ignores that and fetches a piece at a time.
> And we do that twice (once for main from dwarf, once for main from elf).
> 
> (gdb) b main
> Sending packet: $m4007b4,12#5d...Packet received: 554889e5be1c094000bf40106000e8e1feff
> Sending packet: $m4007b4,1#2b...Packet received: 55
> Sending packet: $m4007b5,3#2e...Packet received: 4889e5
> Sending packet: $m4007b4,12#5d...Packet received: 554889e5be1c094000bf40106000e8e1feff
> Sending packet: $m4007b4,1#2b...Packet received: 55
> Sending packet: $m4007b5,3#2e...Packet received: 4889e5
> Sending packet: $m4007b8,1#2f...Packet received: be
> Breakpoint 1 at 0x4007b8: file hello.cc, line 6.
> 
> There's only a handful of calls to gdbarch_skip_prologue.
> They could all be updated to employ whatever caching/prefetching
> is appropriate.

Sure.

>  > If I were to try one, I think it would be along the lines of
>  > a new TARGET_OBJECT_DISASM_MEMORY, and somehow pass more info down
>  > the target_xfer interface so that the core memory reading code
>  > handles the caching.  Probably, that'd be done with a new pair of
>  > 'begin/end code caching' functions that would be called at the
>  > appropriate places.  The new code in dis_asm_read_memory would
>  > then be pushed to target.c, close to where stack cache is handled.
> 
> How hard would it be to do that now?

I'm not personally going to do it now, so "impossible" for me.  :-)
But if Yao or Hafiz, or Taimoor or someone else can spend the
effort, then of course that'd be great.

>  > The main point there should be consensus on, is that a caching
>  > scheme is a better solution for the disassembly use case, than trusting
>  > read-only sections is, for the former doesn't have the problem with
>  > self-modifying code, and, in addition, it also speeds up disassembling
>  > when there is _no_ corresponding binary/'text section'.
> 
> How often do we see bug reports of slow disassembly when there is no
> corresponding binary/text section?

The original use case that motivated this caching, that is,
a frontend that has a disassembly window that gets
refreshed/updated very frequently, should trigger that.

> Plus self modifying code won't always provide the bounds necessary
> to trigger the prefetching this patch does (not all jitters use
> gdb's jit interface to register all instances of self-modified code).

But "disassemble $random_address, +400" (or the MI equivalent) does.

If we don't have bounds to work with, then what could we do, if
we're playing it safe?  What are you suggesting?

> Also, I feel I need to point out that we rejected an early version
> of Yao's varobj patch because it used casting to effect baseclass/subclassing.

I haven't followed that thread.  Was subclassing really the reason, or
was it because subclassing (whatever the language) didn't make sense
for that particular case?  We certainly use baseclass/subclassing today
in several places.  E.g., breakpoint.c.

-- 
Pedro Alves
