From: Mark Goodwin <mgoodwin@redhat.com>
To: "Frank Ch. Eigler" <fche@redhat.com>
Cc: pcp@oss.sgi.com, systemtap@sources.redhat.com
Subject: Re: [pcp] suitability of PCP for event tracing
Date: Wed, 01 Sep 2010 06:25:00 -0000
Message-ID: <4C7DF1E8.70107@redhat.com>
In-Reply-To: <20100831194941.GC5762@redhat.com>

On 09/01/2010 05:49 AM, Frank Ch. Eigler wrote:
> Hi -
>
> Thanks, Nathan, Ken, Greg, Mark, for clarifying the status quo and
> some of the history.
>
> We understand that the two problem domains are traditionally handled
> with the event-tracing -vs- stats-monitoring distinction.  We're trying
> to see where best to focus efforts to make some small steps to bridge
> the two, where plenty of compromises are possible.  We'd prefer to
> help build on an existing project with a nice community than to do new
> stuff.

yes certainly :)

> For the poll-based data gathering issue, a couple of approaches came up:
>
> (1) bypassing pmcd and generating an pmarchive file directly from
>      trace data This appears to imply continuing the archive-vs-live
>      dichotomy that makes it difficult for clients to process both
>      recent and current data seamlessly together.

one of the issues with the live vs archive dichotomy is that live
data is always available (since you're requesting it explicitly from
a PMDA that is otherwise passive), whereas archive data is not
available unless it was configured to be collected beforehand (see pmlogger).
There is too much data to collect everything all the time - it's
impractical and intrusive, so some form of filtering and/or aggregation
needs to be done (see pmlogsummary, and Greg's old project too).
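
To illustrate the point, pmlogger only records the metrics its
configuration names, at the intervals given - roughly along these
lines (metric choices here are just for illustration):

    log mandatory on every 10 seconds {
        kernel.all.load
        disk.dev.read
        disk.dev.write
    }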

> Since using such
>      files would probably also need a custom client, then we'd not be
>      using much of the pcp infrastructure, only as a passive data
>      encoding layer.  This may not be worthwhile.
>
> (2) protocol extensions for live-push on pmda and pmcd-client interfaces
>      This clearly larger effort is only worth undertaking with the
>      community's sympathy and assistance.  It might have some
>      interesting integration possibilities with the other tools,
>      especially pmie (the inference engine).

yep - I suspect Ken and maybe Nathan would have further comments on this

>
> For the static-pmns issue, the possibility of dynamic instance
> domains, metric subspaces is probably sufficient, if the event
> parameters are limited to only 1-2 degrees of freedom.  (In contrast,
> imagine browsing a trace of NFS or kernel VFS operations; these have
> ~5 parameters.)

PCP instance domains are traditionally single dimensional, though there
are a few exceptions such as kernel.percpu.interrupts. It's easy enough
to split multi-dimensional data structures out into multiple metrics with
a common instance domain.
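
As a rough sketch of that split (names and PMIDs are invented, with
NFS standing in for the PMDA's domain number), each parameter of an
NFS trace event could become its own metric, all sharing one instance
domain whose instances identify the individual events:

    nfs.trace {
        op       NFS:0:0
        client   NFS:0:1
        file     NFS:0:2
        bytes    NFS:0:3
        latency  NFS:0:4
    }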

> For the scalar-payloads issue, the BLOB/STRING metric types are indeed
> available but are opaque to other tools, so don't compose well.  Would
> you consider one additional data type, something like a JSON[1]
> string?  It would be self-describing, with pmie and general processing
> opportunities, though those numbers would lack the PMDA_PMUNITS
> dimensioning.

this could work using string or binary blob data types in the
existing protocols - though there is a size limit. And one of
the blessed features of PCP is that the client monitoring tools can
more or less monitor any metric - so any solution here would
also need specially crafted client tools. Extensions to the perl
bindings would probably work best, e.g. interfacing with perl-JSON-*
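
To make that concrete, here is a minimal sketch of the export side
using the existing string type - the PMDA name, domain number and
event fields are invented for illustration, and a matching client
would decode the fetched string with perl-JSON:

    use strict;
    use warnings;
    use PCP::PMDA;
    use JSON;                      # perl-JSON

    # hypothetical PMDA name and domain number
    my $pmda = PCP::PMDA->new('jsonevent', 248);

    # one string-typed metric carrying the latest event as a JSON document
    $pmda->add_metric(pmda_pmid(0, 0), PM_TYPE_STRING, PM_INDOM_NULL,
                      PM_SEM_INSTANT, pmda_units(0, 0, 0, 0, 0, 0),
                      'jsonevent.last', 'most recent event as JSON', '');

    $pmda->set_fetch_callback(sub {
        my ($cluster, $item, $inst) = @_;
        if ($cluster == 0 && $item == 0) {
            # invented event record; a real PMDA would take this from its trace source
            my $event = { op => 'READ', client => '10.1.2.3', bytes => 8192 };
            return (encode_json($event), 1);   # (value, value-is-available)
        }
        return (PM_ERR_PMID, 0);
    });

    $pmda->run;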

> For the filtering issue, pmStore() is an interesting possibility,
> allowing the PMDAs to bear the brunt.  OTOH, if pmcd evolved into a
> data-push-capable widget, it could serve as a filtering proxy,
> requiring separate API or interpretation of the pmStore data.

well, pmcd is already data-push capable using the pmstore interface,
allowing clients to store values for certain metrics in some of
the PMDAs. Filtering and parsing is done by the PMDA itself and
pmcd just acts as a pass-through proxy (kind of a back-channel to
the pull interface).

pmstore hasn't really been used in anger like this though - more just
for setting config & control options and the like. The same (or similar)
protocol has also been used for a data source to open a socket directly
to a PMDA and tie into the PMDA's select loop, rather than going via pmcd.
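
As a usage illustration, a client (or a human) can push a filter spec
down through pmcd to a PMDA with the pmstore(1) command - the metric
name and value below are hypothetical:

    $ pmstore mypmda.filter.spec '{"subsystem":"nfs","op":"READ"}'

The PMDA would then parse the stored string in its store callback and
adjust what it traces; pmcd itself just relays the request.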

>
> For the web-based frontend issue, yeah, javascript+svg+etc. sounds
> most promising, especially if it can be made to speak the native wire
> protocol to pmcd.  This would seem to argue for a stateful
> archive-serving pmcd, or perhaps an archive-serving proxy, as in Greg's
> old project.

Time averaging, aggregation and filtering were all ambitious aims
of the project Greg's talking about - I wonder if that code could
ever be resurrected and open sourced? One abomination here was
that a PMDA could also be a client - and potentially query itself
for metrics(!)

> Is this sounding reasonable?
>

it's going to take a lot more discussion, but enthusiasm seems to
be on our side :)

Cheers
-- Mark Goodwin
