From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 8272 invoked by alias); 14 Nov 2006 20:33:35 -0000
Received: (qmail 8241 invoked by uid 9453); 14 Nov 2006 20:33:33 -0000
Date: Tue, 14 Nov 2006 20:33:00 -0000
Message-ID: <20061114203333.8240.qmail@sourceware.org>
From: teigland@sourceware.org
To: cluster-cvs@sources.redhat.com
Subject: cluster/group/gfs_controld main.c plock.c
Mailing-List: contact cluster-cvs-help@sourceware.org; run by ezmlm
Precedence: bulk
Sender: cluster-cvs-owner@sourceware.org
X-SW-Source: 2006-q4/txt/msg00414.txt.bz2

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL5
Changes by:	teigland@sourceware.org	2006-11-14 20:33:33

Modified files:
	group/gfs_controld: main.c plock.c

Log message:
	Add plock rate limit option -l <limit>. The current default is no
	limit (0). If a limit is set, gfs_controld will send no more than
	<limit> plock operations (multicast messages) per second.

	Given a limit of 10, one file system where plocks are used, and a
	program that runs a tight loop of fcntl lock/unlock operations, the
	maximum number of loop iterations in one second would be 5, since
	each iteration generates two plock operations (one lock, one
	unlock). If eight nodes were all doing this, there would be 80
	network multicasts per second from all nodes in the cluster.

	We also record in the debug log the volume of plock messages
	accepted locally and received from the network. A log entry is
	written for every 1000 locally accepted plock operations and for
	every 1000 operations received from the network.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18&r2=1.18.2.1
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/plock.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.25&r2=1.25.2.1