From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (qmail 10222 invoked by alias); 10 Mar 2006 09:32:03 -0000
Received: (qmail 10215 invoked by uid 22791); 10 Mar 2006 09:32:02 -0000
X-Spam-Status: No, hits=-2.5 required=5.0 tests=AWL,BAYES_00,SPF_HELO_PASS,SPF_PASS
X-Spam-Check-By: sourceware.org
Received: from mx1.redhat.com (HELO mx1.redhat.com) (66.187.233.31) by sourceware.org (qpsmtpd/0.31) with ESMTP; Fri, 10 Mar 2006 09:32:01 +0000
Received: from int-mx1.corp.redhat.com (int-mx1.corp.redhat.com [172.16.52.254]) by mx1.redhat.com (8.12.11/8.12.11) with ESMTP id k2A9Vxqx025066 for ; Fri, 10 Mar 2006 04:31:59 -0500
Received: from pobox.corp.redhat.com (pobox.corp.redhat.com [172.16.52.156]) by int-mx1.corp.redhat.com (8.11.6/8.11.6) with ESMTP id k2A9Vm117896; Fri, 10 Mar 2006 04:31:48 -0500
Received: from vpn83-137.boston.redhat.com (vpn83-137.boston.redhat.com [172.16.83.137]) by pobox.corp.redhat.com (8.12.8/8.12.8) with ESMTP id k2A9Vl51013925; Fri, 10 Mar 2006 04:31:48 -0500
Subject: Re: tutorial draft checked in
From: Martin Hunt
To: "Frank Ch. Eigler"
Cc: systemtap@sources.redhat.com
In-Reply-To: <20060310005104.GF2632@redhat.com>
References: <20060303175653.GE6873@redhat.com> <1141420594.3595.46.camel@dragon> <20060310005104.GF2632@redhat.com>
Content-Type: text/plain
Organization: Red Hat Inc.
Date: Fri, 10 Mar 2006 09:32:00 -0000
Message-Id: <1141983106.3380.46.camel@dragon>
Mime-Version: 1.0
X-Mailer: Evolution 2.2.3 (2.2.3-3.fc4)
Content-Transfer-Encoding: 7bit
X-IsSubscribed: yes
Mailing-List: contact systemtap-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Subscribe:
List-Post:
List-Help:
Sender: systemtap-owner@sourceware.org
X-SW-Source: 2006-q1/txt/msg00756.txt.bz2

On Thu, 2006-03-09 at 19:51 -0500, Frank Ch. Eigler wrote:
> Hi -
>
> hunt wrote:
>
> > >This operation is efficient (taking a shared lock) because the
> > >aggregate values are kept separately on each processor, and are only
> > >aggregated across processors on request.
> >
> > Surprised me. I checked and this accurately described the current
> > implementation, but the shared lock is unnecessary and should probably
> > not be mentioned.
> > [...]
>
> This is the subject of bug #2224. The runtime is taking locks, and
> the translator is also emitting locks. In my opinion, the runtime
> should leave the maximum possible locking discretion to the
> translator, since e.g. only the latter knows how to enforce locking
> timeouts over contentious data.

We have argued this again and again. I see no reason why you want the
translator to be more complicated and slower. Surely we have better
things to work on. For the specific case of pmaps, I am sure I spent
more time arguing about it than writing it.

The disadvantages of what you want to do are:

1. Reader locks are slow. They don't scale as well as per-cpu
   spinlocks.

2. The translator holds the lock for the duration of the whole probe,
   whereas the runtime holds the lock for as short a time as possible.

3. Having the translator handle low-level locking eliminates the
   possibility of switching the runtime to a more efficient lockless
   solution later.

> Anyway, if the advantage of having unshared per-cpu locks for the <<<
> case was large, the translator could adopt the technique just as
> easily.

Obviously not true. It is already done and works in the runtime pmap
implementation.
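For illustration only, here is a rough userspace sketch of the two
schemes being argued about. Pthreads primitives stand in for the kernel
ones, and names such as NBUCKETS and struct bucket are invented for
this example; it is not the actual runtime or translator code.

    /* (a) one shared lock around every update -- roughly what the
     *     translator-emitted locking looks like from the probe's view.
     * (b) per-cpu (here: per-bucket) spinlocks, each protecting its
     *     own counter, aggregated across buckets only when a reader
     *     asks -- roughly what the runtime pmap does. */
    #include <pthread.h>
    #include <stdio.h>

    #define NBUCKETS 4              /* stand-in for the number of cpus */

    /* (a) shared-lock version: all writers contend on one rwlock */
    static pthread_rwlock_t shared_lock = PTHREAD_RWLOCK_INITIALIZER;
    static long shared_count;

    static void shared_add(long v)
    {
            pthread_rwlock_wrlock(&shared_lock);  /* everyone serializes here */
            shared_count += v;
            pthread_rwlock_unlock(&shared_lock);
    }

    /* (b) per-bucket version: a writer only touches its own bucket's lock */
    struct bucket {
            pthread_spinlock_t lock;
            long count;
    };
    static struct bucket buckets[NBUCKETS];

    static void percpu_add(int cpu, long v)
    {
            struct bucket *b = &buckets[cpu % NBUCKETS];

            pthread_spin_lock(&b->lock);          /* no cross-cpu contention */
            b->count += v;
            pthread_spin_unlock(&b->lock);        /* held only for the update */
    }

    /* aggregation across buckets happens only when the map is read */
    static long percpu_read(void)
    {
            long sum = 0;

            for (int i = 0; i < NBUCKETS; i++) {
                    pthread_spin_lock(&buckets[i].lock);
                    sum += buckets[i].count;
                    pthread_spin_unlock(&buckets[i].lock);
            }
            return sum;
    }

    int main(void)
    {
            for (int i = 0; i < NBUCKETS; i++)
                    pthread_spin_init(&buckets[i].lock, PTHREAD_PROCESS_PRIVATE);

            shared_add(1);
            percpu_add(0, 1);
            percpu_add(1, 1);
            printf("shared=%ld percpu=%ld\n", shared_count, percpu_read());
            return 0;
    }

The point of the contrast is that in (b) writers on different cpus
never touch the same cache line or lock, which is where the scaling
difference in the numbers below comes from.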
I ran a few benchmarks to demonstrate pmap scalability and to measure
the additional overhead from the translator's reader-writer locks.
The two test scripts were:

    Regular maps:  probe TEST { syscalls[probefunc()]++ }
    Pmaps:         probe TEST { syscalls[probefunc()] <<< 1 }

Running on a dual-processor hyperthreaded machine, I started threads
that made syscalls as fast as possible. Results, in Kprobes/sec:

                1 thread    4 threads
    Regular        340         500
    Pmaps          340         940
    Pmaps*         380        1040

Pmaps* is pmaps with the redundant reader-writer locks removed. The
measured overhead of those locks is approximately 10% of the cpu time
for this test case.
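The load-generator threads are not shown above; a minimal stand-in,
assuming a getpid()-style syscall loop and a hard-coded thread count
rather than whatever harness was actually used, could look like:

    /* Minimal syscall load generator -- an assumed stand-in for the
     * test harness, not the one used for the numbers above. */
    #include <pthread.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define NTHREADS 4                  /* 1 or 4 in the runs above */

    static void *spin(void *arg)
    {
            (void)arg;
            for (;;)
                    syscall(SYS_getpid); /* cheap syscall, issued as fast as possible */
            return NULL;
    }

    int main(void)
    {
            pthread_t tid[NTHREADS];

            for (int i = 0; i < NTHREADS; i++)
                    pthread_create(&tid[i], NULL, spin, NULL);
            pause();                     /* run until interrupted */
            return 0;
    }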