From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (qmail 6991 invoked by alias); 13 May 2006 02:51:33 -0000
Received: (qmail 6978 invoked by uid 9572); 13 May 2006 02:51:32 -0000
Date: Sat, 13 May 2006 02:51:00 -0000
Message-ID: <20060513025132.6976.qmail@sourceware.org>
From: wcheng@sourceware.org
To: cluster-cvs@sources.redhat.com
Subject: cluster/gfs-kernel/src/gfs ops_file.c
Mailing-List: contact cluster-cvs-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Subscribe:
List-Post:
List-Help:
Sender: cluster-cvs-owner@sourceware.org
X-SW-Source: 2006-q2/txt/msg00223.txt.bz2
List-Id:

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL4
Changes by:	wcheng@sourceware.org	2006-05-13 02:51:32

Modified files:
	gfs-kernel/src/gfs: ops_file.c

Log message:
	Found a performance issue in the gfs_fsync() implementation: the
	GL_SYNC glock flag introduces repeated page writes and metadata
	flushes, as observed with a customer benchmark. The uploaded patch:

	1. Replaces the shared lock with an exclusive lock.
	2. Uses the Linux VFS layer's generic_osync_inode() (the helper used
	   by the O_SYNC code path) to flush the local in-core inode to disk,
	   instead of the original GFS inode_go_sync().

	With these changes, application bandwidth jumps from 240.94 KB/s to
	2.67 MB/s, very close to ext3's under the lock_nolock mount option.
	For other details, see bugzilla 190950.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/gfs-kernel/src/gfs/ops_file.c.diff?cvsroot=cluster&only_with_tag=RHEL4&r1=1.16.2.9&r2=1.16.2.10
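
For reference, below is a rough sketch of what the revised gfs_fsync()
could look like, following the log message (exclusive glock plus
generic_osync_inode()). This is not the committed diff -- see the cvsweb
URL above for the real change. The GFS helper names (get_v2ip, gfs_holder,
gfs_glock_nq_init, gfs_glock_dq_uninit, LM_ST_EXCLUSIVE, ip->i_gl) and the
exact generic_osync_inode() signature are assumptions based on the
RHEL4-era (2.6.9) tree.

/*
 * Sketch only: illustrates the approach described in the log message.
 * Identifiers other than generic_osync_inode()'s OSYNC_* flags are
 * assumed from the RHEL4 GFS source and may differ from the actual diff.
 */
static int
gfs_fsync(struct file *file, struct dentry *dentry, int datasync)
{
	struct inode *inode = dentry->d_inode;
	struct gfs_inode *ip = get_v2ip(inode);
	struct gfs_holder i_gh;
	int error;

	/* Take the inode glock exclusively instead of shared + GL_SYNC,
	   so the glock layer no longer forces repeated page writes and
	   metadata flushes on every fsync. */
	error = gfs_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, &i_gh);
	if (error)
		return error;

	/* Let the generic VFS O_SYNC helper write the in-core inode
	   (data + metadata) to disk instead of GFS's inode_go_sync(). */
	if (inode->i_mapping->nrpages)
		error = generic_osync_inode(inode, inode->i_mapping,
					    OSYNC_DATA | OSYNC_METADATA |
					    OSYNC_INODE);

	gfs_glock_dq_uninit(&i_gh);

	return error;
}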