public inbox for gdb@sourceware.org
* Always cache memory and registers
@ 2003-06-22 22:26 Andrew Cagney
  2003-06-22 22:34 ` Daniel Jacobowitz
  2003-06-23 19:02 ` Discrepancy between gdbarch_frame_locals_address and get_frame_locals_address? Paul N. Hilfinger
  0 siblings, 2 replies; 7+ messages in thread
From: Andrew Cagney @ 2003-06-22 22:26 UTC (permalink / raw)
  To: gdb

Hello,

Think back to the rationale for GDB simply flushing its entire state 
after the user modifies memory or a register.  No matter how inefficient 
that update is, it can't be any worse than the full refresh needed after 
a single step.  All effort should be put into making single step fast, 
and not into making read-modify-write fast.

I think I've just found a similar argument that can be used to justify 
always enabling a data cache.  GDB's dcache is currently disabled (or at 
least was the last time I looked :-).  The rationale was that the user, 
when inspecting in-memory devices, would be confused if repeated reads 
did not reflect the device's current register values.

The problem with this is GUIs.

A GUI can simultaneously display multiple views of the same memory 
region.  Should each of those displays generate separate target reads 
(with different values and side effects) or should they all share a 
common cache?

I think the latter, because it is impossible, from a GUI, to predict or 
control the number of reads a request will trigger.  Hence I'm 
thinking that a data cache should be enabled by default.

The only proviso is that the current cache and target vector 
would need to be modified so that the cache only ever requests the data 
needed, leaving it to the target to supply more if available (much like 
registers do today).  The current dcache doesn't do this; it instead 
pads out small reads :-(
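
Something like this rough sketch is what I have in mind (every name 
here is hypothetical, this isn't the current code):

	/* Hypothetical sketch.  The cache asks the target for exactly the
	   bytes it is missing; the target answers through a supply
	   callback with whatever region it actually chose to read, which
	   may well be larger (just like register supply today).  */

	typedef unsigned long CORE_ADDR;	/* simplified */

	struct dcache;				/* opaque cache state */

	typedef void (*supply_ftype) (struct dcache *cache, CORE_ADDR addr,
				      const void *bytes, int len);

	extern int dcache_contains (struct dcache *c, CORE_ADDR addr, int len);
	extern void dcache_supply (struct dcache *c, CORE_ADDR addr,
				   const void *bytes, int len);
	extern void dcache_copy (struct dcache *c, CORE_ADDR addr,
				 void *buf, int len);
	extern void target_fetch_memory (CORE_ADDR addr, int len,
					 supply_ftype supply,
					 struct dcache *cache);

	static void
	dcache_read (struct dcache *c, CORE_ADDR addr, void *buf, int len)
	{
	  if (!dcache_contains (c, addr, len))
	    /* Request only the missing bytes; any padding or bulk
	       transfer is the target's decision, not core GDB's.  */
	    target_fetch_memory (addr, len, dcache_supply, c);
	  dcache_copy (c, addr, buf, len);
	}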

One thing that could be added to this is the idea of a sync point.
When supplying data, the target could mark it as volatile.  Such 
volatile data would then be drawn from the cache but only up until the 
next sync point.  After that a fetch would trigger a new read. 
Returning to the command line, for instance, could be a sync point. 
Individual x/i commands on a volatile region would be separated by sync 
points, and hence would trigger separate reads.
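
The cache side of that is, I think, simple.  A rough sketch (again, 
made-up names):

	/* One cache line; IS_VOLATILE is set when the target flagged the
	   supplied region as device-like.  CORE_ADDR as above.  */
	struct dcache_line
	{
	  CORE_ADDR addr;
	  int valid;
	  int is_volatile;
	  unsigned char data[64];
	};

	/* Called at each sync point, e.g. every return to the command
	   line, or between individual x/i commands.  */
	static void
	dcache_sync_point (struct dcache_line *lines, int nr_lines)
	{
	  int i;
	  for (i = 0; i < nr_lines; i++)
	    if (lines[i].is_volatile)
	      /* The next fetch of this region triggers a fresh target
		 read; non-volatile lines stay cached until a resume.  */
	      lines[i].valid = 0;
	}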

Thoughts?  I think this provides at least one technical reason for 
enabling the cache.

enjoy,
Andrew


* Re: Always cache memory and registers
  2003-06-22 22:26 Always cache memory and registers Andrew Cagney
@ 2003-06-22 22:34 ` Daniel Jacobowitz
  2003-06-22 22:55   ` Andrew Cagney
  2003-06-23 19:02 ` Discrepancy between gdbarch_frame_locals_address and get_frame_locals_address? Paul N. Hilfinger
  1 sibling, 1 reply; 7+ messages in thread
From: Daniel Jacobowitz @ 2003-06-22 22:34 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: gdb

On Sun, Jun 22, 2003 at 06:26:13PM -0400, Andrew Cagney wrote:
> Hello,
> 
> Think back to the rationale for GDB simply flushing its entire state 
> after the user modifies memory or a register.  No matter how inefficient 
> that update is, it can't be any worse than the full refresh needed after 
> a single step.  All effort should be put into making single step fast, 
> and not into making read-modify-write fast.
> 
> I think I've just found a similar argument that can be used to justify 
> always enabling a data cache.  GDB's dcache is currently disabled (or at 
> least was the last time I looked :-).  The rationale was that the user, 
> when inspecting in-memory devices, would be confused if repeated reads 
> did not reflect the device's current register values.
> 
> The problem with this is GUIs.
> 
> A GUI can simultaneously display multiple views of the same memory 
> region.  Should each of those displays generate separate target reads 
> (with different values and side effects) or should they all share a 
> common cache?
> 
> I think the latter, because it is impossible, from a GUI, to predict or 
> control the number of reads a request will trigger.  Hence I'm 
> thinking that a data cache should be enabled by default.

Good reasoning.  I like it.

> The only proviso is that the current cache and target vector 
> would need to be modified so that the cache only ever requests the data 
> needed, leaving it to the target to supply more if available (much like 
> registers do today).  The current dcache doesn't do this; it instead 
> pads out small reads :-(

It needs tweaking for other reasons too.  It should probably have a
much higher threshold before it starts throwing out data, for one
thing.

Padding out small reads isn't such a bad idea.  It generally seems to
be the latency that's a real problem, esp. for remote targets.  I think
both NetBSD and GNU/Linux do fast bulk reads natively now?  I'd almost
want to increase the padding.

> One thing that could be added to this is the idea of a sync point.
> When supplying data, the target could mark it as volatile.  Such 
> volatile data would then be drawn from the cache but only up until the 
> next sync point.  After that a fetch would trigger a new read. 
> Returning to the command line, for instance, could be a sync point. 
> Individual x/i commands on a volatile region would be separated by sync 
> points, and hence would trigger separate reads.
> 
> Thoughts?  I think this provides at least one technical reason for 
> enabling the cache.

Interesting idea there.  I'm not quite sure how much work vs. return it
would be.

-- 
Daniel Jacobowitz
MontaVista Software                         Debian GNU/Linux Developer


* Re: Always cache memory and registers
  2003-06-22 22:34 ` Daniel Jacobowitz
@ 2003-06-22 22:55   ` Andrew Cagney
  2003-06-23  3:57     ` Daniel Jacobowitz
  0 siblings, 1 reply; 7+ messages in thread
From: Andrew Cagney @ 2003-06-22 22:55 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: gdb


>> The only proviso is that the current cache and target vector 
>> would need to be modified so that the cache only ever requests the data 
>> needed, leaving it to the target to supply more if available (much like 
>> registers do today).  The current dcache doesn't do this; it instead 
>> pads out small reads :-(
> 
> 
> It needs tweaking for other reasons too.  It should probably have a
> much higher threshold before it starts throwing out data, for one
> thing.
> 
> Padding out small reads isn't such a bad idea.  It generally seems to
> be the latency that's a real problem, esp. for remote targets.  I think
> both NetBSD and GNU/Linux do fast bulk reads natively now?  I'd almost
> want to increase the padding.

No, other way.

Having GDB pad out small reads can be a disaster - read one too many 
bytes and ``foomp''.  This is one of the reasons why the dcache was 
never enabled.

However, it is totally reasonable for the target (not GDB) to supply 
megabytes of memory mapped data when GDB only asked for a single byte! 
The key point is that it is the target that makes any padding / transfer 
decisions, and not core GDB.  If the remote target fetches too much data 
and `foomp' then, hey not our fault, we didn't tell it to read that 
address :-^

>> One thing that could be added to this is the idea of a sync point.
>> When supplying data, the target could mark it as volatile.  Such 
>> volatile data would then be drawn from the cache but only up until the 
>> next sync point.  After that a fetch would trigger a new read. 
>> Returning to the command line, for instance, could be a sync point. 
>> Individual x/i commands on a volatile region would be separated by sync 
>> points, and hence would trigger separate reads.
>> 
>> Thoughts?  I think this provides at least one technical reason for 
>> enabling the cache.
> 
> 
> Interesting idea there.  I'm not quite sure how much work vs. return it
> would be.

There needs to at least be a contingency plan (if someone finds a 
technical problem :-).  I also think it's relatively easy to implement: 
reach a sync point, flush volatile data from the cache.

Andrew



* Re: Always cache memory and registers
  2003-06-22 22:55   ` Andrew Cagney
@ 2003-06-23  3:57     ` Daniel Jacobowitz
  2003-06-23 14:13       ` Andrew Cagney
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Jacobowitz @ 2003-06-23  3:57 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: gdb

On Sun, Jun 22, 2003 at 06:54:48PM -0400, Andrew Cagney wrote:
> 
> >>The only proviso is that the current cache and target vector 
> >>would need to be modified so that the cache only ever requests the data 
> >>needed, leaving it to the target to supply more if available (much like 
> >>registers do today).  The current dcache doesn't do this; it instead 
> >>pads out small reads :-(
> >
> >
> >It needs tweaking for other reasons too.  It should probably have a
> >much higher threshold before it starts throwing out data, for one
> >thing.
> >
> >Padding out small reads isn't such a bad idea.  It generally seems to
> >be the latency that's a real problem, esp. for remote targets.  I think
> >both NetBSD and GNU/Linux do fast bulk reads natively now?  I'd almost
> >want to increase the padding.
> 
> No, other way.
> 
> Having GDB pad out small reads can be a disaster - read one too many 
> bytes and ``foomp''.  This is one of the reasons why the dcache was 
> never enabled.

What do you mean?  I would have thought this was the responsibility of
the stub to manage...

> However, it is totally reasonable for the target (not GDB) to supply 
> megabytes of memory mapped data when GDB only asked for a single byte! 
> The key point is that it is the target that makes any padding / transfer 
> decisions, and not core GDB.  If the remote target fetches too much data 
> and `foomp' then, hey not our fault, we didn't tell it to read that 
> address :-^

Oh, I see what you're getting at.  Hmm, this would require fudging the
interfaces a bit, in order for the target to return excess memory.  It
could be done.  Hm....

-- 
Daniel Jacobowitz
MontaVista Software                         Debian GNU/Linux Developer


* Re: Always cache memory and registers
  2003-06-23  3:57     ` Daniel Jacobowitz
@ 2003-06-23 14:13       ` Andrew Cagney
  0 siblings, 0 replies; 7+ messages in thread
From: Andrew Cagney @ 2003-06-23 14:13 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: gdb

> On Sun, Jun 22, 2003 at 06:54:48PM -0400, Andrew Cagney wrote:
> 
>> 
> 
>> >>The only proviso is that the current cache and target vector 
>> >>would need to be modified so that the cache only ever requests the data 
>> >>needed, leaving it to the target to supply more if available (much like 
>> >>registers do today).  The current dcache doesn't do this; it instead 
>> >>pads out small reads :-(
> 
>> >
>> >
>> >It needs tweaking for other reasons too.  It should probably have a
>> >much higher threshold before it starts throwing out data, for one
>> >thing.
>> >
>> >Padding out small reads isn't such a bad idea.  It generally seems to
>> >be the latency that's a real problem, esp. for remote targets.  I think
>> >both NetBSD and GNU/Linux do fast bulk reads natively now?  I'd almost
>> >want to increase the padding.
> 
>> 
>> No, other way.
>> 
>> Having GDB pad out small reads can be a disaster - read one too many 
>> bytes and ``foomp''.  This is one of the reasons why the dcache was 
>> never enabled.
> 
> 
> What do you mean?  I would have thought this was the responsibility of
> the stub to manage...

>> However, it is totally reasonable for the target (not GDB) to supply 
>> megabytes of memory mapped data when GDB only asked for a single byte! 
>> The key point is that it is the target that makes any padding / transfer 
>> decisions, and not core GDB.  If the remote target fetches too much data 
>> and `foomp' then, hey not our fault, we didn't tell it to read that 
>> address :-^
> 
> 
> Oh, I see what you're getting at.  Hmm, this would require fudging the
> interfaces a bit, in order for the target to return excess memory.  It
> could be done.  Hm....

Well, given that the target interface is up for an overhaul anyway, this 
fudging is, er, in the noise.  supply_register(), for instance, needs to 
be parameterized with something meaningful.

In terms of the remote protocol, there's nothing saying that a T packet 
can't return memory, or that a register/memory fetch can't respond with 
extra info.

For the target vector, my guess is something like:

	target->fetch{register,memory} (<what>, supply-methods)

so that a target can supply anything for a given memory/register request.
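
Spelled out a little more (rough, and all the names are made up):

	/* With supply methods, one fetch request can be answered with
	   anything: a register fetch that also supplies memory, much as
	   a T packet might.  */

	typedef unsigned long CORE_ADDR;	/* simplified */

	struct supply_methods
	{
	  void *context;
	  void (*supply_memory) (void *context, CORE_ADDR addr,
				 const void *bytes, int len);
	  void (*supply_register) (void *context, int regnum,
				   const void *bytes, int len);
	};

	struct target_ops
	{
	  void (*fetch_memory) (CORE_ADDR addr, int len,
				const struct supply_methods *supply);
	  void (*fetch_register) (int regnum,
				  const struct supply_methods *supply);
	};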

Andrew



* Discrepancy between gdbarch_frame_locals_address and get_frame_locals_address?
  2003-06-22 22:26 Always cache memory and registers Andrew Cagney
  2003-06-22 22:34 ` Daniel Jacobowitz
@ 2003-06-23 19:02 ` Paul N. Hilfinger
  2003-06-23 19:47   ` Andrew Cagney
  1 sibling, 1 reply; 7+ messages in thread
From: Paul N. Hilfinger @ 2003-06-23 19:02 UTC (permalink / raw)
  To: ac131313; +Cc: gdb


Andrew,

Whilst poking around on related things, I observed that, at least on
Linux, the values of gdbarch_frame_locals_address and
get_frame_locals_address disagree.  The latter appears to be correct,
since it is used in read_var_value in what I assume is the intended
way (add SYMBOL_VALUE to get_frame_locals_address (frame) to get
variable address).  Would you like a patch, or is there a subtle
point here that I am missing?

Thanks.

Paul


* Re: Discrepancy between gdbarch_frame_locals_address and get_frame_locals_address?
  2003-06-23 19:02 ` Discrepancy between gdbarch_frame_locals_address and get_frame_locals_address? Paul N. Hilfinger
@ 2003-06-23 19:47   ` Andrew Cagney
  0 siblings, 0 replies; 7+ messages in thread
From: Andrew Cagney @ 2003-06-23 19:47 UTC (permalink / raw)
  To: Hilfinger; +Cc: gdb

> Andrew,
> 
> Whilst poking around on related things, I observed that, at least on
> Linux,

Um, which GNU/Linux, which architecture, and how recent a GDB?  Let's 
assume i386 and gdb_6_0-branch.

> the values of gdbarch_frame_locals_address and
> get_frame_locals_address disagree.  The latter appears to be correct,
> since it is used in read_var_value in what I assume is the intended
> way (add SYMBOL_VALUE to get_frame_locals_address (frame) to get
> variable address).

Yes.  get_frame_locals_address returns what the debug info thinks of as 
the frame base; local variables are specified as offsets from that 
address.  Just don't confuse it with GDB's [deprecated] get_frame_base :-(
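
In code, the computation Paul describes read_var_value making is just:

	addr = get_frame_locals_address (frame) + SYMBOL_VALUE (sym);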

> Would you like a patch, or is there a subtle
> point here that I am missing?

There is likely a subtle point^D^D^Dproblem, viz: the backward 
compatible path is
	get_frame_locals_address
	-> [deprecated] gdbarch_frame_locals_address
	-> get_frame_base
	-> get_frame_id.stack_addr
and so pre-frame code has identical values for both.  New code, 
however, has a different frame ID .stack_addr and frame-locals-address.

You've probably found code using the wrong one.

Andrew


