From: Josh Stone <jistone@redhat.com>
To: Jake Maul <jakemaul@gmail.com>
Cc: William Cohen <wcohen@redhat.com>, systemtap@sourceware.org
Subject: Re: Linux VFS cache hit rate script
Date: Mon, 25 Apr 2011 22:53:00 -0000
Message-ID: <4DB5FB66.8080600@redhat.com>
In-Reply-To: <4DB0C1AD.6080802@redhat.com>

On 04/21/2011 04:45 PM, Josh Stone wrote:
> On 04/21/2011 04:01 PM, Jake Maul wrote:
>>       2 	dev: 0	devname: N/A
>>  762956 	dev: 16	devname: N/A
>>     520 	dev: 18	devname: N/A
>>    4183 	dev: 22	devname: N/A
>>       4 	dev: 23	devname: N/A
>>    1288 	dev: 265289728	devname: dm-0
>>       1 	dev: 27	devname: N/A
>>     872 	dev: 3	devname: N/A
>>    3094 	dev: 5	devname: N/A
>>  380875 	dev: 6	devname: N/A
[...]
>> That bizarrely long dev number might be relevant... or maybe that's
>> just a normal quirk of LVM?
> 
> It's not so bizarre - kernel device numbers are (MAJOR<<20)|MINOR, so
> this turns out to be device 253,0.  That also means all those low dev
> numbers have MAJOR==0, which I think supports my theory that they are
> not normal.
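
If you want to double-check a number like that from within stap, the
MAJOR()/MINOR() helpers in the dev tapset will decode it for you.  A
quick sketch, assuming the dev you printed is the kernel dev_t that the
vfs probes report:

  # decode a kernel dev_t into its major,minor pair
  probe begin {
    dev = 265289728
    printf("%d -> %d,%d\n", dev, MAJOR(dev), MINOR(dev))  # prints 253,0
    exit()
  }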

I was thinking about what could be causing such high N/A counts, even on
my nearly-idle laptop.  I'm pretty sure now that the MAJOR==0 devices are
actually "emulated" filesystems, like sysfs, procfs, etc.  So I don't
think the N/A has anything to do with caching - it's just that there's
literally no block device associated with the request.

Then I think the high counts here are because stap is getting into a
feedback loop as it reads your printfs over debugfs.  A request comes
in and your script printfs it; then stapio reads that output and copies
it out, where it gets read again by your terminal emulator or your
|sort|uniq pipe -- creating even more vfs_read events in a never-ending
chain.  So you should probably at least filter stap's own events out of
your results with a condition like:  if (pid() != stp_pid()) { printf... }
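
Concretely, that guard would look something like the following.  This is
only a sketch -- I'm guessing at the shape of your vfs.read probe -- but
the pid() != stp_pid() test is the part that matters:

  global reads

  probe vfs.read {
    # skip reads generated by this stap session itself
    # (stapio, the terminal, the |sort|uniq pipe, ...)
    if (pid() != stp_pid())
      reads[dev, devname] <<< 1
  }

  probe end {
    foreach ([d, n] in reads)
      printf("%d\tdev: %d\tdevname: %s\n", @count(reads[d, n]), d, n)
  }

Aggregating in the script like this, instead of printf'ing every event
and piping through sort|uniq, also keeps the debugfs traffic (and the
feedback loop) to a minimum.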

Some of the other probes in the vfs tapset deal with the page cache
directly, which I think you'll need to get true vfs caching rates.
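
As a very rough sketch of that direction -- assuming your tapset version
has the vfs.add_to_page_cache probe, and treating a page being added to
the page cache as a sign that the data wasn't already cached -- something
like:

  global vfs_reads, pages_added

  probe vfs.read {
    if (pid() != stp_pid())
      vfs_reads++
  }

  # a page entering the page cache roughly means it wasn't cached before
  probe vfs.add_to_page_cache {
    if (pid() != stp_pid())
      pages_added++
  }

  probe end {
    printf("vfs reads: %d, pages added to page cache: %d\n",
           vfs_reads, pages_added)
  }

That's not a literal hit rate, but comparing the two counts should get
you much closer to the page cache behavior you're after.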

Josh
