Date: Wed, 08 Mar 2006 20:47:00 -0000
Message-ID: <20060308204709.17052.qmail@sourceware.org>
From: bmarzins@sourceware.org
To: cluster-cvs@sources.redhat.com
Subject: cluster/gfs-kernel/src/gfs ops_inode.c

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	STABLE
Changes by:	bmarzins@sourceware.org	2006-03-08 20:47:09

Modified files:
	gfs-kernel/src/gfs: ops_inode.c

Log message:
	Really gross hack!!! This is a workaround for one of the bugs that
	got lumped into 166701. It breaks POSIX behavior in a corner case
	to avoid crashing... It's icky.

	When NFS opens a file with O_CREAT, the kernel nfs daemon checks
	whether the file exists. If it does, nfsd does the *right thing*
	(it either opens the file or, if the file was opened with O_EXCL,
	returns an error). If the file doesn't exist, it passes the request
	down to the underlying file system. Unfortunately, since nfs
	*knows* that the file doesn't exist, it doesn't bother to pass a
	nameidata structure, which would include the intent information.
	However, since gfs is a cluster file system, the file could have
	been created on another node after nfs checked for it. If this is
	the case, gfs needs the intent information to do the *right
	thing*, but it panics when it finds a NULL pointer instead of the
	nameidata.

	Now, instead of panicking when it finds a NULL nameidata pointer,
	gfs assumes that the file was not created with O_EXCL. This
	assumption could be wrong, with the result that an application
	could think it has created a new file when in fact it has opened
	an existing one.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/gfs-kernel/src/gfs/ops_inode.c.diff?cvsroot=cluster&only_with_tag=STABLE&r1=1.6.6.1.2.2&r2=1.6.6.1.2.3
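
A minimal sketch of the shape of this workaround, assuming the 2.6-era
->create() inode operation signature (where nd may be NULL when the call
comes from nfsd). The helper gfs_createi() and the exact control flow are
illustrative assumptions, not the actual ops_inode.c diff linked above:

	#include <linux/fs.h>
	#include <linux/namei.h>
	#include <linux/errno.h>

	static int gfs_create(struct inode *dir, struct dentry *dentry,
			      int mode, struct nameidata *nd)
	{
		int error;

		/* gfs_createi() is a hypothetical helper standing in for
		 * the cluster-wide create; it returns -EEXIST if another
		 * node created the file after nfsd's existence check. */
		error = gfs_createi(dir, dentry, mode);
		if (error == -EEXIST) {
			/* Only honor O_EXCL if we actually have intent
			 * information. nfsd passes nd == NULL, and
			 * dereferencing it here is what used to panic
			 * the node. */
			if (nd && (nd->intent.open.flags & O_EXCL))
				return -EEXIST;
			/* No nameidata: assume !O_EXCL and open the
			 * existing file. This is the POSIX-breaking
			 * corner case described above. */
			return 0;
		}
		return error;
	}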