public inbox for systemtap@sourceware.org
* DTrace for Linux
From: Stone, Joshua I @ 2005-12-14 21:49 UTC (permalink / raw)
  To: systemtap

This is getting a lot of buzz on the blogosphere:
http://blogs.sun.com/roller/page/ahl?entry=dtrace_for_linux

Using BrandZ and DTrace, he was able to root-cause a problem with the
Linux "top" utility.

Looking through his scripts, I think that a lot of his analysis could be
done *today* with SystemTap.  There's some user-mode stuff that we can't
do, but most of it seems ok.

It would be nice if we could duplicate his results using SystemTap, to
show our current maturity...

Comments?

Josh


* Re: DTrace for Linux
From: Martin Hunt @ 2005-12-14 23:51 UTC (permalink / raw)
  To: Stone, Joshua I; +Cc: systemtap

On Wed, 2005-12-14 at 10:47 -0800, Stone, Joshua I wrote:
> This is getting a lot of buzz on the blogosphere:
> http://blogs.sun.com/roller/page/ahl?entry=dtrace_for_linux

OK, I gave it a quick try. To do this, I needed the system_call tapset,
which (like the rest of systemtap) is a rapidly changing work in
progress. Unfortunately it doesn't run on my x86_64 dev system, so I had
to switch to an x86 box.

First, to get a quick look at what top was doing, I ran this script:

$ stap -c top sys1.stp
-------
global called

probe kernel.syscall.* {
	if (execname() == "top")
		called[name]++
}

probe end {
	foreach (n in called-)
		printf("top called %s\t%d times\n", n, called[n])
}
--------

When I exited top, it gave me the following output:
top called read 1551 times
top called open 1433 times
top called creat        1433 times
top called close        1430 times
top called stat64       700 times
top called alarm        285 times
top called fcntl64      253 times
top called rt_sigaction 201 times
top called mmap2        33 times
top called fstat64      30 times
top called write        23 times
top called ioctl        20 times
top called munmap       17 times
top called lseek        15 times
top called getdents64   12 times
top called llseek       10 times
top called access       8 times
top called gettimeofday 6 times
top called select       6 times
[...]

That looks fine, and very much like what Sun got. So, on to some crude
profiling.

The first thing I noticed is that the tapset sets "name" to different
values for the entry and return probes, which makes matching them up a
bit of a headache. So I edited the tapset and removed the ".return"
suffix from the "name" values in the return probes. If no one has a
reason why it should be there, we should make this change permanent.
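
For reference, the aliases in question look roughly like this (a sketch,
not the tapset's exact contents; the real aliases also set up argument
variables). The edit changes the return alias to assign plain "read"
instead of "read.return":

probe kernel.syscall.read = kernel.function("sys_read")
{
	name = "read"
}
probe kernel.syscall.read.return = kernel.function("sys_read").return
{
	name = "read"		# was: name = "read.return"
}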

Unfortunately, I then got an error from the syscall tapset on
execve.return:

semantic error: no match for probe point
         while: resolving probe point kernel.function("sys_execve").return

So I commented out the whole sys_execve section in the tapset and tried
again.

----
global called, ttime

probe kernel.syscall.* {
	if (execname() == "top")
		called[name] = gettimeofday_us()
}

probe kernel.syscall.*.return {
	if (execname() == "top")
		ttime[name] += gettimeofday_us() - called[name]
}

probe end {
	foreach (n in ttime-)
		printf("%s took %d usecs\n", n, ttime[n])
}
----
This time I got the following output:
select took 100037096 usecs
open took 93042 usecs
read took 79974 usecs
creat took 66253 usecs
close took 30882 usecs
stat64 took 28486 usecs
write took 7171 usecs
fcntl64 took 6297 usecs
alarm took 5958 usecs
rt_sigaction took 4017 usecs
getdents64 took 2289 usecs
ioctl took 747 usecs
lseek took 353 usecs
access took 324 usecs
llseek took 253 usecs
fstat64 took 238 usecs
mmap2 took 186 usecs
munmap took 171 usecs
gettimeofday took 124 usecs
time took 117 usecs
socketcall took 76 usecs
mprotect took 30 usecs
connect took 24 usecs
socket took 16 usecs
[...]

OK, now this is much different from what Sun got. There is certainly no
munmap() problem here. Without details about Sun's emulation environment
and what version of Linux they are running, I cannot even guess what the
problem is. Maybe one clue is this odd comment: "[...] what I've heard my
colleague, Bryan, refer to as the "top problem": your system is slow, so
you run top. What's the top process? Top!"

I've never seen "top" as the top process on anything but an extremely
idle system.

Martin






* Re: DTrace for Linux
From: Kevin Stafford @ 2005-12-15  1:55 UTC (permalink / raw)
  To: Martin Hunt; +Cc: systemtap

Martin Hunt wrote:

> [...]
> Unfortunately it doesn't run on my x86_64 dev system, so I had
> to switch to an x86 box.
> [...]
The syscall tapset has been tested on x86_64 with kernel 2.6.9-24.ELsmp.
I found that differences in architecture, kernel version, or configuration
can cause kernel.syscall.* { } to fail during compilation. The directory
structure has been altered in CVS to provide tapsets only for the kernels
and architectures that I have successfully instrumented and tested.

> The first thing I noticed is that the tapset sets "name" to different
> values for the entry and return probes, which makes matching them up a
> bit of a headache. So I edited the tapset and removed the ".return"
> suffix from the "name" values in the return probes. If no one has a
> reason why it should be there, we should make this change permanent.
This is fine so long as probe alias wildcards are limited to expanding
only within the . (dot) separators (see BZ #1928). For example: if
kernel.syscall.* expanded to all entry-point aliases (kernel.syscall.*)
AND all return-point aliases (kernel.syscall.*.return), then a reference
to "name" could no longer differentiate between entry probes and return
probes.
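
To illustrate (a sketch, assuming the string tapset's isinstr() function
is available): with the ".return" suffix kept, a single handler covering
both expansions can still separate the two cases; with the suffix
removed, it could not.

global entries, returns

probe kernel.syscall.*, kernel.syscall.*.return
{
	if (isinstr(name, ".return"))
		returns[name]++		# only distinguishable by the suffix
	else
		entries[name]++
}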

> Unfortunately, I then got an error from the syscall tapset on
> execve.return:
>
> semantic error: no match for probe point
>          while: resolving probe point kernel.function("sys_execve").return
>
> So I commented out the whole sys_execve section in the tapset and tried
> again.
This could be caused by the aforementioned differences in kernel version,
arch, or config options. More likely, it is due to sys_execve.return being
on the translator blacklist. Jim submitted a patch that fixed the execve
return probe issue
(see http://sourceware.org/ml/systemtap/2005-q4/msg00223.html).

-- 
Kevin Stafford
DES 2 | MS 2M3
Beaverton - OR
Linux Technology Center
IBM Systems & Technology
Phone: 1-503-578-3039
Email: kevinrs@us.ibm.com




* Re: DTrace for Linux
From: Andi Kleen @ 2005-12-16  1:05 UTC (permalink / raw)
  To: Martin Hunt; +Cc: systemtap

Martin Hunt <hunt@redhat.com> writes:
> 
> OK, now this is much different from what Sun got. There is certainly no
> munmap() problem here. Without details about Sun's emulation environment
> and what version of Linux they are running, I cannot even guess what the
> problem is. Maybe one clue is this odd comment: "[...] what I've heard my

If I understand BrandZ from the link above correctly, it is essentially
running Linux user space on a Solaris kernel.

If munmap is a problem for them and it isn't on native Linux, that just
means munmap is much slower on Solaris than on real Linux. Now that they
know, they can hopefully do something about it.

However, he makes the point that munmap flushes TLBs and that the cost
scales with the number of CPUs. So it's possible that he ran it on a
bigger system than you did and it cost more CPU time because of that.
But I don't quite buy this argument for top, because top is
single-threaded and Linux will never do remote TLB flushes for a
single-threaded process.

It should only make a difference for multithreaded programs where
multiple threads sharing the same VM run on different CPUs.

Perhaps Solaris isn't that clever in avoiding TLB flushes(?), although
it is hard to imagine.
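
That part is easy to check on native Linux with a script in the same
style as Martin's above (a minimal sketch; it assumes the syscall tapset
provides munmap entry/return aliases, as the wildcard output suggests):

global start, total, nmunmap

probe kernel.syscall.munmap {
	if (execname() == "top")
		start = gettimeofday_us()
}
probe kernel.syscall.munmap.return {
	if (execname() == "top" && start) {
		total += gettimeofday_us() - start
		nmunmap++
		start = 0
	}
}
probe end {
	printf("munmap: %d calls, %d us total\n", nmunmap, total)
}

Since top is single-threaded, a single scalar start timestamp is enough
here.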

-Andi


* Fwd: DTrace for Linux
From: James Dickens @ 2005-12-16  2:24 UTC (permalink / raw)
  To: systemtap

Okay, this was blocked, so I'm resending.


---------- Forwarded message ----------
From: James Dickens <jamesd.wi@gmail.com>
Date: Dec 15, 2005 7:53 PM
Subject: Re: DTrace for Linux
To: Andi Kleen <ak@suse.de>
Cc: Martin Hunt <hunt@redhat.com>, "systemtap@sources.redhat.com"
<systemtap@sources.redhat.com>




On 16 Dec 2005 01:55:06 +0100, Andi Kleen <ak@suse.de> wrote:
> [...]
> However, he makes the point that munmap flushes TLBs and that the cost
> scales with the number of CPUs. So it's possible that he ran it on a
> bigger system than you did and it cost more CPU time because of that.
> [...]
> Perhaps Solaris isn't that clever in avoiding TLB flushes(?), although
> it is hard to imagine.

 Okay, this is getting to be off-topic, but since you asked: I have a
system running BrandZ.

 Hardware:
 VMware running on a 2.613 GHz system; the guest has 384 MB allocated to it.

 OS: Solaris Express Community Release build, patched to run the BrandZ bits.

 BrandZ is running the CentOS Linux image that was pre-packaged at
http://www.opensolaris.org/os/community/brandz/
 -bash-2.05b# uname -av
 Linux linux1 2.4.21 BrandX fake linux i686 i686 i386 GNU/Linux
 -bash-2.05b#

 -bash-2.05b# top -v
 procps version 2.0.17
 -bash-2.05b#

 # dtrace -n lx-syscall:::entry'/execname == "top"/{ @[probefunc] = count(); }'
 dtrace: description 'lx-syscall:::entry' matched 272 probes
 ^C

   access                                                            8
   fstat64                                                           8
   gettimeofday                                                      8
   gtime                                                             8
   llseek                                                            8
   select                                                            8
   getdents64                                                       32
   lseek                                                            32
   stat64                                                           80
   rt_sigaction                                                    112
   fcntl64                                                         136
   write                                                           152
   alarm                                                           168
   close                                                           256
   read                                                            328
   open                                                            336
 #

 Okay, that looks normal, but there is no munmap; maybe he was using an
older version of BrandZ (the Linux userland environment). But let's keep
going.

 lx-sys.d
 #!/usr/sbin/dtrace -s

 lx-syscall:::entry
 /execname == "top"/
 {
         self->ts = vtimestamp;
 }

 lx-syscall:::return
 /self->ts/
 {
         @[probefunc] = sum(vtimestamp - self->ts);
         self->ts = 0;
 }


 dtrace: script './lx-sys.d' matched 544 probes
 ^C

   gettimeofday                                                  32490
   llseek                                                        47045
   access                                                        94997
   gtime                                                        111993
   fstat64                                                      129067
   lseek                                                        209131
   select                                                       276210
   alarm                                                        916786
   stat64                                                      1093732
   fcntl64                                                     1143852
   getdents64                                                  1625671
   rt_sigaction                                                2227934
   write                                                       2336432
   close                                                       2749497
   read                                                        5171047
   open                                                        5569743
 #

 Okay, normal, but now open is using the most time. That is because, for
each process, top opens the file, reads the data, and closes it; if top
could instead just re-read an already-open file handle, it would be a
lighter process, especially as more and more processes are running.



 lx-sys2.d
 #!/usr/sbin/dtrace -s

lx-syscall:::entry
/execname == "top"/
{
        self->ts = vtimestamp;
}

lx-syscall:::return
/self->ts/
{
        @[probefunc] = sum(vtimestamp - self->ts);

        @["- all syscalls -"] = sum(vtimestamp - self->ts);
        self->ts = 0;
}

sched:::on-cpu
/execname == "top"/
{
        self->on = timestamp;
}

sched:::off-cpu

/self->on/
{
        @["- total -"] = sum(timestamp - self->on);
        self->on = 0;
}


 dtrace: script './lx-sys2.d' matched 550 probes
 ^C

   getegid                                                       16240
   geteuid                                                       20344
   getgid                                                        24503
   getuid                                                        38199
   gettimeofday                                                  49445
   rt_sigprocmask                                                54397
   llseek                                                        57766
   gtime                                                         83750
   newuname                                                     115006
   brk                                                          126012
   set_thread_area                                              137075
   nanosleep                                                    200987
   access                                                       253384
   lseek                                                        280470
   mmap2                                                        284076
   munmap                                                       441192
   select                                                       544337
   fstat64                                                      595619
   old_mmap                                                     595924
   alarm                                                        891482
   ioctl                                                       1060726
   stat64                                                      1602166
   fcntl64                                                     1958110
   rt_sigaction                                                2526971
   getdents64                                                  2752676
   write                                                       3573395
   socketcall                                                  4072963
   close                                                       4954184
   open                                                        7627755
   read                                                        8364549
   - all syscalls -                                           43303703
   - total -                                                  56738308
 #


 Okay, the rest doesn't apply, since either he had a weird version of top
or hit some other bug that has since been fixed. Maybe it's a race,
because he has an SMP machine and I don't.

 James Dickens
 uadmin.blogspot.com





* Re: DTrace for Linux
From: Andi Kleen @ 2005-12-16  3:02 UTC (permalink / raw)
  To: James Dickens; +Cc: Andi Kleen, Martin Hunt, systemtap

>   gettimeofday                                                  32490
>   llseek                                                        47045
>   access                                                        94997
>   gtime                                                        111993
>   fstat64                                                      129067
>   lseek                                                        209131
>   select                                                       276210
>   alarm                                                        916786
>   stat64                                                      1093732
>   fcntl64                                                     1143852
>   getdents64                                                  1625671
>   rt_sigaction                                                2227934
>   write                                                       2336432
>   close                                                       2749497
>   read                                                        5171047
>   open                                                        5569743

This looks about as expected. top uses /proc a lot, and /proc is very
open/read/close/getdents64 intensive. I suppose the writes are the
console output.
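
On native Linux you can see exactly which files top opens with a short
script (a sketch; it assumes kernel debuginfo is available so the
$filename argument resolves, plus the user_string() helper):

global opens

probe kernel.function("sys_open") {
	if (execname() == "top")
		opens[user_string($filename)]++
}
probe end {
	foreach (f in opens-)
		printf("%5d\t%s\n", opens[f], f)
}

Expect the per-process /proc files to dominate the counts.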

Why it uses rt_sigaction that often I don't know, but normally that call
should be fairly cheap - at least it is on Linux, AFAIK.

Your numbers suggest it isn't quite so cheap on Solaris (it takes a lot
of time for not that many calls); perhaps that is one area where the Sun
folks can still improve.

-Andi


* Re: DTrace for Linux
From: weikong @ 2005-12-16  5:02 UTC (permalink / raw)
  To: hunt; +Cc: systemtap

[-- Attachment #1: Type: text/plain; charset="gb18030", Size: 981 bytes --]

On Thursday, 15 December 2005, at 07:32, Martin Hunt wrote:
> OK, now this is much different from what Sun got. There is certainly no
> munmap() problem here. Without details about Sun's emulation environment
> and what version of Linux they are running, I cannot even guess what the
> problem is. Maybe one clue is this odd comment: "[...] what I've heard my
> colleague, Bryan, refer to as the "top problem": your system is slow, so
> you run top. What's the top process? Top!"

The reason: some system calls (such as sys_select, sys_open, and
device-related calls) may yield the CPU and go through the scheduler, so
when such a call returns, a time calculation like the one in your script
can come out bigger than the real time. We must subtract the time spent
yielded to the scheduler.

I wrote a script to calculate each process's system-call time and
user-function time for comparison, but it still has some bugs; I hope
someone can correct them.

Attachment 1: cmpcall.stp
Attachment 2: result

-Wei Kong

[-- Attachment #2: cmpcall.stp --]
[-- Type: text/plain, Size: 4547 bytes --]

#!stap -g 
# BUG 1: The "log" function always throws a "segment error"; I do not know
#        why. So I commented them out ("#") and added the titles to the
#        result myself.
# BUG 2: I cannot probe the switch_to function, so I use "schedule" instead,
#        but the result may not be very precise.
# BUG 3: Deleting the global variables does not work, which makes me sad.

global syscalls_count		# count per syscall.  key: syscall
global syscalls			# total time per syscall.  key: syscall
global syscalls_pid_count	# syscall counts per pid.  key: pid, syscall
global syscalls_pid_time	# per-pid, per-syscall running time.  key: pid, syscall
global pids			# total syscall time per process.  key: pid, command

global time_start, time_end	# running time for each syscall
global time_last, time_now	# running time between process schedules
global pid_count		# schedule count per pid.  key: pid, command
global pid_time			# total scheduled time per pid.  key: pid, command

global yield_time		# yielded time to subtract
global yield_name
global yield_flag

function syscall_name:string () %{
        /* Extract the "sys_*" name from the current probe point string,
           e.g. kernel.function("sys_read@fs/read_write.c"), padded to
           25 characters. */
        char *str, buff[80];
        char *tok;
        str = buff;
        strlcpy(str, CONTEXT->probe_point, sizeof(buff));
        tok = strsep(&str, "\"");
        tok = strsep(&str, "@");
        sprintf(str, "%-25s", tok);
        strlcpy(THIS->__retvalue, str, MAXSTRINGLEN);
%}

function syscall_time( )
{
	time_end[ pid( ), syscall_name( ) ] = gettimeofday_us( )
	# run time = end - start, minus the time yielded to the scheduler
	run_time = time_end[ pid( ), syscall_name( ) ] - time_start[ pid( ), syscall_name( ) ] - yield_time[ pid( ), syscall_name( ) ]

	syscalls[ syscall_name( ) ] += run_time
	pids[ pid( ), execname( ) ] += run_time
	syscalls_pid_time[ pid( ), syscall_name( ) ] += run_time

	yield_time[ pid( ), syscall_name( ) ] = 0	# reset yield bookkeeping
	yield_name[ pid( ), syscall_name( ) ] = 0
	yield_flag[ pid( ), syscall_name( ) ] = 0
}

function syscall_times( )
{
	syscalls_pid_count[ pid( ), syscall_name( ) ]++
	syscalls_count[ syscall_name( ) ]++

	yield_name[ pid( ), syscall_name( ) ] = 1	# mark this syscall in progress
	yield_time[ pid( ), syscall_name( ) ] = 0

	time_start[ pid( ), syscall_name( ) ] = gettimeofday_us( )
}


function print_cmpcall( ) 
{
	#log( "--------------------------------------------------------------" )
	#log( "--------------------------------------------------------------" )
	#log( "PID\tCOUNTS\tTIME(us)\tSYSCALL(us)\tUSERFUNC(us)\tCMD (In 5s)" )
	foreach( [ pid, cmd ] in pid_time- )
	{
		if( pid == 0 )
			continue;
		log( string( pid ) . "\t" . string( pid_count[ pid, cmd ] ) . "\t" . string( pid_time[ pid, cmd ] ) . "\t\t" . string( pids[ pid, cmd ] ) . "\t\t" . string( pid_time[ pid, cmd ] - pids[ pid, cmd ] ) . "\t\t" . cmd )
		
	}

	#log( "----------------------------------------" )
	#log( "PID\tSYSCALL\t\t\t\tTIMES\tTIME(us)\tCOMMAND" )
	foreach( [ pid, syscall ] in syscalls_pid_count )
	{
		if( syscalls_pid_count[ pid,syscall ] == 0 )
			continue;
		log( string( pid ) . "\t" . syscall . "\t" . string( syscalls_pid_count[ pid, syscall ] ) . "\t" . string( syscalls_pid_time[ pid, syscall ] ) . "\t\t" )
	}

	#log( "---------------------------------------" )
	#log( "SYSCALL\t\t\t\tTIMES\tTIME(us)\tAVERAGE" )
	lcnt = 1
	foreach( syscall in syscalls_count- ) 
	{
		log( syscall . "\t" . string( syscalls_count[ syscall ] ) . "\t" . string( syscalls[ syscall ] ) )
		if ( lcnt++ == 10 )
			break;
	}

	#Debug Information#
	#foreach( [ sid, sname ] in yield_time )
	#	log( "Id:" . string( sid ) . "\t name:" . sname . "\t" . string( yield_time[ sid, sname ] ) )

	reset_syscalls( )
}

function reset_syscalls( ) 
{
	delete syscalls 
	delete pids	
	delete syscalls_count
	delete syscalls_pid_count
	delete syscalls_pid_time
	delete yield_time	
	delete yield_name	
	delete yield_flag	
	time_last = gettimeofday_us()
}

probe kernel.function( "schedule" )
{
	time_now = gettimeofday_us( )
	a = time_now - time_last
	pid_time[ pid( ), execname( ) ] += a
	pid_count[ pid( ), execname( ) ]++

	foreach( [ id, sysname ] in yield_name )
	{
		if( yield_flag[ id, sysname ] == 0 )
			yield_flag[ id, sysname ] = 1
		else 	yield_time[ id, sysname ] += a	#Add in yield_time
	}
	time_last = time_now
}

probe kernel.function( "sys_*" ) 
{
	syscall_times( )		#count
}

probe kernel.function( "sys_*" ).return 
{
	syscall_time( )			#time (us)
}

probe begin
{
	log( "BEGIN" )
	time_last = gettimeofday_us()
}

probe timer.jiffies( 1000 ) 
{
	print_cmpcall( )
	exit( )
}

probe end
{
	log( "END" )
}

[-- Attachment #3: result --]
[-- Type: text/plain, Size: 4291 bytes --]

BEGIN
PID     COUNTS  TIME(us)        SYSCALL(us)     USERFUNC(us)    CMD
4665	343	114800		12173		102627		X
4763	64	50988		4438		46550		kdeinit
4772	106	50876		3220		47656		kdeinit
4736	22	45231		11176		34055		kdeinit
4745	133	40954		9127		31827		scim-panel-gtk
4757	85	35850		2733		33117		kdeinit
4761	104	24639		1871		22768		kdeinit
4765	50	24299		2570		21729		kdeinit
4767	41	17037		1175		15862		kdeinit
4770	69	16565		6865		9700		autorun
4749	27	10057		647		9410		artsd
5	82	8685		0		8685		events/0
4740	40	7659		487		7172		scim-launcher
4099	18	4820		1285		3535		hald
3	16	1447		0		1447		ksoftirqd/0
4758	4	1421		435		986		kwrapper
38	4	636		0		636		kapmd
4963	1	605		138		467		stpd
1	2	551		69		482		init
4	4	291		0		291		watchdog/0
75	1	144		0		144		pdflush


SYSCALL				TIMES	TIME(us)
sys_gettimeofday         	1718	13347
sys_ioctl                	662	9935
sys_stat64               	570	9397
sys_select               	485	3941
sys_read                 	428	62
sys_writev               	192	4427
sys_rt_sigprocmask       	104	989
sys_write                	80	3007
sys_setitimer            	64	839
sys_poll                 	48	6092

PID 	SYSCALL 			TIMES   TIME(us)  // detail of each process's system calls
4963	sys_read                 	3	62		
4963	sys_fstat64              	1	23		
4963	sys_mmap2                	1	53		
4736	sys_gettimeofday         	65	474		
4736	sys_ioctl                	16	191		
4736	sys_select               	16	1767		
4767	sys_gettimeofday         	66	482		
4767	sys_ioctl                	73	693		
4767	sys_select               	25	0		
4765	sys_gettimeofday         	89	1210		
4765	sys_stat64               	24	612		
4765	sys_ioctl                	76	748		
4765	sys_select               	28	0		
4736	sys_stat64               	544	8732		
4757	sys_gettimeofday         	250	1859		
4757	sys_ioctl                	84	867		
4757	sys_select               	84	0		
4745	sys_gettimeofday         	104	815		
4745	sys_ioctl                	77	993		
4745	sys_poll                 	46	6092		
4740	sys_ioctl                	40	487		
4740	sys_select               	40	0		
4763	sys_gettimeofday         	568	3718		
4763	sys_ioctl                	64	720		
4763	sys_select               	64	0		
4749	sys_gettimeofday         	62	481		
4749	sys_select               	27	0		
4758	sys_kill                 	4	57		
4758	sys_rt_sigprocmask       	8	69		
4758	sys_rt_sigaction         	4	32		
4758	sys_nanosleep            	4	277		
4770	sys_waitpid              	4	346		
4770	sys_wait4                	4	135		
4770	sys_exit                 	4	0		
4770	sys_open                 	12	2081		
4770	sys_ioctl                	24	2453		
4770	sys_close                	12	1389		
4770	sys_read                 	8	0		
4770	sys_readlink             	44	461		
4770	sys_nanosleep            	4	0		
4665	sys_rt_sigprocmask       	96	920		
4665	sys_read                 	192	0		
4665	sys_gettimeofday         	384	3062		
4665	sys_setitimer            	64	839		
4665	sys_ioctl                	16	383		
4665	sys_writev               	192	4427		
4665	sys_select               	144	2174		
4745	sys_read                 	65	0		
4772	sys_gettimeofday         	94	986		
4772	sys_ioctl                	142	1433		
4772	sys_read                 	48	0		
4745	sys_write                	32	1227		
4772	sys_write                	16	801		
4772	sys_select               	39	0		
4761	sys_gettimeofday         	32	223		
4761	sys_ioctl                	48	669		
4761	sys_read                 	80	0		
4767	sys_read                 	16	0		
4761	sys_write                	32	979		
4761	sys_select               	16	0		
4765	sys_read                 	16	0		
4736	sys_newuname             	1	12		
4665	sys_sigreturn            	32	368		
4099	sys_gettimeofday         	4	37		
4099	sys_open                 	2	527		
4099	sys_ioctl                	2	298		
4099	sys_close                	2	407		
4099	sys_time                 	2	16		
4099	sys_poll                 	2	0		
4749	sys_times                	8	74		
4749	sys_rt_sigaction         	4	40		
4749	sys_sigreturn            	4	46		
1	sys_time                 	1	9		
1	sys_stat64               	2	53		
1	sys_fstat64              	1	7		
1	sys_select               	1	0		
4749	sys_time                 	1	6		
4745	sys_select               	1	0		
4757	sys_time                 	1	7		


* Re: DTrace for Linux
From: Martin Hunt @ 2005-12-16  7:55 UTC (permalink / raw)
  To: weikong; +Cc: systemtap

On Fri, 2005-12-16 at 11:02 +0800, weikong@redflag-linux.com wrote:

> # BUG 1: The "log" function always throws a "segment error"; I do not
> know why. So I commented them out and added the titles to the result
> myself.

They work fine for me. 

> # BUG 2: I cannot probe the switch_to function, so I use "schedule"
> instead, but the result may not be very precise.

I'll look into this. I saw something about it earlier this week.

> # BUG 3: Deleting the global variables does not work, which makes me
> sad.

Can you explain what you are expecting and what it did?

To get the best results, you should probably filter your probes with
something like "if (target() == pid())" or "if (execname() == "top")".
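
Applied to cmpcall.stp, that might look like this (a sketch; note that
target() is only meaningful when stap is run with -c or -x):

probe kernel.function( "sys_*" )
{
	if (target() == pid())
		syscall_times( )
}

probe kernel.function( "sys_*" ).return
{
	if (target() == pid())
		syscall_time( )
}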


> 		log( string( pid ) . "\t" . string( pid_count[ pid, cmd ] )
> 		     . "\t" . string( pid_time[ pid, cmd ] ) . "\t\t"
> 		     . string( pids[ pid, cmd ] ) . "\t\t"
> 		     . string( pid_time[ pid, cmd ] - pids[ pid, cmd ] )
> 		     . "\t\t" . cmd )

It would be best to use printf() instead of log().


> probe timer.jiffies( 1000 ) 
> {
>         print_cmpcall( )
>         exit( )
> }

If you use a timer to print a large amount of information, as you do
here, it will cause some inaccuracy in your data, because the data
structures stay locked for a long time while they are printed. If you
are only going to print once, it would be better simply to move the
print_cmpcall() into the end probe.
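
For example (a sketch of that restructuring, keeping the timer only to
stop the run):

probe timer.jiffies( 1000 )
{
	exit( )			# just stop; the end probe does the printing
}

probe end
{
	print_cmpcall( )	# print once, after tracing has stopped
}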




