From: fche@redhat.com (Frank Ch. Eigler)
To: Craig Ringer <craig@2ndquadrant.com>
Cc: systemtap@sourceware.org
Subject: Re: SDTs with data types and argument names
Date: Thu, 09 Jan 2020 18:46:00 -0000
Message-ID: <87blrcfox9.fsf@redhat.com>
In-Reply-To: <CAMsr+YHUT1z=WfX1p6EUSP8fG2N0ZDtgZcdPmRz_j4p-NKHPjQ@mail.gmail.com>


Hi -

> It'd be great to expose the probe argument names and their data types to
> systemtap when SDTs are generated from a probes.d file. It'd make sense to
> offer this capability when probes are defined with STAP_PROBE(...) etc. in
> their own builds too.

Yeah.  I believe a kernel-bpf-oriented group was speculating last year
about extending sdt.h in a similarly motivated way.


> The goal is to let you write
>
> probe process("myapp").mark("some__tracepoint")
> {
>     printf("hit some__tracepoint(%s, %d)\n",
>         user_string(useful_firstarg_name),
>         some_secondarg->somemember->somelongmember);
> }
> and display useful arg names and types in `stap -L` too.

Note that one point of the sdt.h structure was to make the executables
self-sufficient with respect to extracting this data, even if there is
no debuginfo available.  Adding type names can only work if that
debuginfo is available after all, or else if it's synthetically
generated via @cast("<foo.h>") type constructs.
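
For example (struct, member, and header names purely hypothetical),
something along these lines pulls the member layout from a header
rather than from debuginfo:

    probe process("myapp").mark("some__tracepoint")
    {
        # $arg2 is the raw probe argument; the struct layout comes
        # from the named header instead of from debuginfo
        printf("%d\n",
            @cast($arg2, "struct somestruct", "<somestruct.h>")->somemember)
    }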


> Saving the argument names looks relatively simple in most cases. Define an
> additional set of macros in the usual STAP_PROBE2() etc style, like the
> following pseudo-ish code:
>
>     #define STAP_PROBE2_ARGNAMES(provider, probename, argname1, argname2) \
>         static const char *const \
>             __stap_argnames_##provider##_##probename[2] \
>             __attribute__ ((unused)) \
>             __attribute__ ((section (".probes"))) \
>             = { #argname1, #argname2 };
>
> i.e. generate some constant data with the argument names in a global array
> that we can look up, based on the provider and probe name, when compiling
> the tapscript.

Yeah, that's a sensible way of doing it, without creating a new note
format or anything.  It's important that the section be marked with
attributes that will force it to be pulled into the main executable
via the usual linker scripts.
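
For instance (a minimal sketch, assuming GCC or Clang), marking the
array "used" rather than merely "unused" should keep it alive even
though nothing in the program references it:

    /* "used" forces the array to be emitted even when unreferenced;
       "section" places it with the other probe data so the usual
       linker scripts carry it into the main executable. */
    #define STAP_PROBE2_ARGNAMES(provider, probename, argname1, argname2) \
        static const char *const \
            __stap_argnames_##provider##_##probename[2] \
            __attribute__ ((used, section (".probes"))) \
            = { #argname1, #argname2 };

Invoked as STAP_PROBE2_ARGNAMES(myprovider, myprobe, fd, flags), that
would emit __stap_argnames_myprovider_myprobe = { "fd", "flags" } into
.probes.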

> [...]
> So my hope is it'll be possible to write
>
>     STAP_PROBE2(myprovider, myprobe, thing->foo, "foo",
>                 get_something(), "something");
>
> and have stap record the supplied argnames, infer the typeinfo, and
> record that too, so it can look it up during tapscript translation.

(FWIW, I wouldn't consider it a failure if the typeinfo has to be
manually added.)
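
If it does have to be spelled out by hand, a companion macro along
these lines (entirely hypothetical, same sketch-level caveats as
above) could record the type strings next to the argument names:

    /* Hypothetical: stores type/name string pairs so the translator
       could recover both without debuginfo. */
    #define STAP_PROBE2_ARGINFO(provider, probename, type1, argname1, \
                                type2, argname2) \
        static const char *const \
            __stap_arginfo_##provider##_##probename[4] \
            __attribute__ ((used, section (".probes"))) \
            = { #type1, #argname1, #type2, #argname2 };

e.g. STAP_PROBE2_ARGINFO(myprovider, myprobe, int, fd, const char *,
path) would record { "int", "fd", "const char *", "path" }.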


- FChE
