public inbox for cluster-cvs@sourceware.org
* cluster: RHEL47 - NFS over GFS issue (fatal: assertion "!bd->bd_pinned && !buffer_busy(bh)" failed)
@ 2009-03-24 18:54 Bob Peterson
From: Bob Peterson @ 2009-03-24 18:54 UTC
  To: cluster-cvs-relay

Gitweb:        http://git.fedorahosted.org/git/cluster.git?p=cluster.git;a=commitdiff;h=e8dad1b073b1d40fe0beeb7ecae751175807a396
Commit:        e8dad1b073b1d40fe0beeb7ecae751175807a396
Parent:        44784c3ed5cf46981b2d03c2a536e2547f3df274
Author:        Bob Peterson <rpeterso@redhat.com>
AuthorDate:    Tue Mar 17 11:04:53 2009 -0500
Committer:     Bob Peterson <rpeterso@redhat.com>
CommitterDate: Tue Mar 24 13:51:10 2009 -0500

NFS over GFS issue (fatal: assertion "!bd->bd_pinned && !buffer_busy(bh)" failed)

bz 455696

There were several places in the code that tried to remove buffers
from the two active items lists (transaction and glock ail), but
each one used its own criteria, and they were not consistent with
one another.  This patch introduces a new function,
gfs_bd_ail_tryremove, that checks the condition of the buffer
against uniform criteria before removing it from the ail.  All the
places that were doing this haphazardly now call the new function.
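
For reference, the uniform check and the call-site pattern look
roughly like this (only a condensed sketch of the new helper; the
full version is in the dio.h hunk below):

    /* Only remove bd from the ail lists when it is safe to do so:
       not pinned, actually on the transaction ail list, and the
       buffer neither dirty nor locked (buffer_busy). */
    if (!bd->bd_pinned && !list_empty(&bd->bd_ail_tr_list) &&
        !buffer_busy(bd->bd_bh)) {
            list_del_init(&bd->bd_ail_tr_list);
            list_del(&bd->bd_ail_gl_list);
            brelse(bd->bd_bh);
    }

    /* Callers hold sd_ail_lock around the helper, for example: */
    spin_lock(&sdp->sd_ail_lock);
    gfs_bd_ail_tryremove(sdp, bd);
    spin_unlock(&sdp->sd_ail_lock);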

So really, not counting the SCSI device driver bug, there were five
main gfs bugs uncovered by this one scenario:

1. There was a timing window whereby a process could mark a buffer
   dirty while it was being synced to disk.  This was fixed by
   introducing a new semaphore, sd_log_flush_lock, which keeps
   that from happening.
2. Buffers were being taken off the ail list at different times
   using different criteria.  That was fixed by the new function
   as mentioned above.
3. Some buffers were not being added to the transaction, especially
   in cases where the files were journaled, which happens mostly
   with directory hash table data.  That was fixed by the
   introduction of the necessary calls to gfs_trans_add_bh.
4. The transaction glock was being released prematurely when the
   glocks hit a capacity watermark.  That's why it often took so
   long to recreate some of these problems.  To prevent this, new
   code was added to function clear_glock so that it releases the
   transaction glock only at unmount time.  I'm using the number
   of glockd daemons to determine whether the call was made during
   normal operations or at unmount time.  To accommodate that
   change, I had to fix a bit of code where the number of glockd
   daemons was going negative.
5. When finding its place in the journal, function gfs_find_jhead
   was not holding the journal log lock.  That caused another
   nasty timing window in which the journal could change while
   its head was being located.  The new locking for items 1 and 5
   is sketched just below.
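
This is only a sketch, condensed from the log.c and recovery.c
hunks that follow:

    /* Item 1: every log flush path now serializes on the new
       sd_log_flush_lock semaphore, so a buffer cannot be marked
       dirty in the middle of being synced to disk. */
    down(&sdp->sd_log_flush_lock);
    log_flush_internal(sdp, gl);
    if (flags)
            gfs_sync_buf(gl, flags | DIO_CHECK);
    up(&sdp->sd_log_flush_lock);

    /* Item 5: gfs_find_jhead now does its binary search for the
       journal head under the journal log lock, so the journal
       cannot change underneath it. */
    gfs_log_lock(sdp);
    /* ... locate the journal head ... */
    gfs_log_unlock(sdp);
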
---
 gfs-kernel/src/gfs/dio.c         |  140 +++-----------------------------------
 gfs-kernel/src/gfs/dio.h         |   27 +++++++-
 gfs-kernel/src/gfs/dir.c         |   15 +++-
 gfs-kernel/src/gfs/file.c        |    2 +
 gfs-kernel/src/gfs/glock.c       |    6 ++-
 gfs-kernel/src/gfs/glops.c       |   12 +--
 gfs-kernel/src/gfs/incore.h      |    2 +
 gfs-kernel/src/gfs/log.c         |   81 ++++++++++++++++++++++-
 gfs-kernel/src/gfs/log.h         |    2 +-
 gfs-kernel/src/gfs/ops_address.c |    4 +-
 gfs-kernel/src/gfs/ops_file.c    |    6 +-
 gfs-kernel/src/gfs/ops_fstype.c  |    5 +-
 gfs-kernel/src/gfs/ops_super.c   |    6 +-
 gfs-kernel/src/gfs/quota.c       |    2 +-
 gfs-kernel/src/gfs/recovery.c    |    4 +
 gfs-kernel/src/gfs/trans.c       |    9 +--
 16 files changed, 165 insertions(+), 158 deletions(-)

diff --git a/gfs-kernel/src/gfs/dio.c b/gfs-kernel/src/gfs/dio.c
index 659ffab..413ee8f 100644
--- a/gfs-kernel/src/gfs/dio.c
+++ b/gfs-kernel/src/gfs/dio.c
@@ -32,8 +32,6 @@
 #include "rgrp.h"
 #include "trans.h"
 
-#define buffer_busy(bh) ((bh)->b_state & ((1ul << BH_Dirty) | (1ul << BH_Lock)))
-
 /**
  * aspace_get_block - 
  * @inode:
@@ -161,19 +159,14 @@ gfs_aspace_releasepage(struct page *page, int gfp_mask)
 		t = jiffies;
 
 		while (atomic_read(&bh->b_count)) {
-			if (atomic_read(&aspace->i_writecount)) {
-				if (time_after_eq(jiffies,
-						  t +
-						  gfs_tune_get(sdp, gt_stall_secs) * HZ)) {
-					stuck_releasepage(bh);
-					t = jiffies;
-				}
-
-				yield();
-				continue;
+			if (time_after_eq(jiffies, t +
+					  gfs_tune_get(sdp,
+						       gt_stall_secs) * HZ)) {
+				stuck_releasepage(bh);
+				t = jiffies;
 			}
 
-			return 0;
+			yield();
 		}
 
 		bd = bh2bd(bh);
@@ -274,23 +267,11 @@ gfs_ail_start_trans(struct gfs_sbd *sdp, struct gfs_trans *tr)
 			if (gfs_trylock_buffer(bh))
 				continue;
 
-			if (bd->bd_pinned) {
+			if (bd->bd_pinned || gfs_bd_ail_tryremove(sdp, bd)) {
 				gfs_unlock_buffer(bh);
 				continue;
 			}
 
-			if (!buffer_busy(bh)) {
-				if (!buffer_uptodate(bh))
-					gfs_io_error_bh(sdp, bh);
-
-				list_del_init(&bd->bd_ail_tr_list);
-				list_del(&bd->bd_ail_gl_list);
-
-				gfs_unlock_buffer(bh);
-				brelse(bh);
-				continue;
-			}
-
 			if (buffer_dirty(bh)) {
 				list_move(&bd->bd_ail_tr_list, head);
 
@@ -334,22 +315,10 @@ gfs_ail_empty_trans(struct gfs_sbd *sdp, struct gfs_trans *tr)
 		bd = list_entry(tmp, struct gfs_bufdata, bd_ail_tr_list);
 		bh = bd->bd_bh;
 
-		if (gfs_trylock_buffer(bh))
-			continue;
-
-		if (bd->bd_pinned || buffer_busy(bh)) {
+		if (gfs_trylock_buffer(bh) == 0) {
+			gfs_bd_ail_tryremove(sdp, bd);
 			gfs_unlock_buffer(bh);
-			continue;
 		}
-
-		if (!buffer_uptodate(bh))
-			gfs_io_error_bh(sdp, bh);
-
-		list_del_init(&bd->bd_ail_tr_list);
-		list_del(&bd->bd_ail_gl_list);
-
-		gfs_unlock_buffer(bh);
-		brelse(bh);
 	}
 
 	ret = list_empty(head);
@@ -360,85 +329,6 @@ gfs_ail_empty_trans(struct gfs_sbd *sdp, struct gfs_trans *tr)
 }
 
 /**
- * ail_empty_gl - remove all buffers for a given lock from the AIL
- * @gl: the glock
- *
- * None of the buffers should be dirty, locked, or pinned.
- */
-
-static void
-ail_empty_gl(struct gfs_glock *gl)
-{
-	struct gfs_sbd *sdp = gl->gl_sbd;
-	struct gfs_bufdata *bd;
-	struct buffer_head *bh;
-
-	spin_lock(&sdp->sd_ail_lock);
-
-	while (!list_empty(&gl->gl_ail_bufs)) {
-		bd = list_entry(gl->gl_ail_bufs.next,
-				struct gfs_bufdata, bd_ail_gl_list);
-		bh = bd->bd_bh;
-
-		gfs_assert_withdraw(sdp, !bd->bd_pinned && !buffer_busy(bh));
-		if (!buffer_uptodate(bh))
-			gfs_io_error_bh(sdp, bh);
-
-		list_del_init(&bd->bd_ail_tr_list);
-		list_del(&bd->bd_ail_gl_list);
-
-		brelse(bh);
-	}
-
-	spin_unlock(&sdp->sd_ail_lock);
-}
-
-/**
- * gfs_inval_buf - Invalidate all buffers associated with a glock
- * @gl: the glock
- *
- */
-
-void
-gfs_inval_buf(struct gfs_glock *gl)
-{
-	struct inode *aspace = gl->gl_aspace;
-	struct address_space *mapping = gl->gl_aspace->i_mapping;
-
-	ail_empty_gl(gl);
-
-	atomic_inc(&aspace->i_writecount);
-	truncate_inode_pages(mapping, 0);
-	atomic_dec(&aspace->i_writecount);
-
-	gfs_assert_withdraw(gl->gl_sbd, !mapping->nrpages);
-}
-
-/**
- * gfs_sync_buf - Sync all buffers associated with a glock
- * @gl: The glock
- * @flags: DIO_START | DIO_WAIT | DIO_CHECK
- *
- */
-
-void
-gfs_sync_buf(struct gfs_glock *gl, int flags)
-{
-	struct address_space *mapping = gl->gl_aspace->i_mapping;
-	int error = 0;
-
-	if (flags & DIO_START)
-		error = filemap_fdatawrite(mapping);
-	if (!error && (flags & DIO_WAIT))
-		error = filemap_fdatawait(mapping);
-	if (!error && (flags & (DIO_INVISIBLE | DIO_CHECK)) == DIO_CHECK)
-		ail_empty_gl(gl);
-
-	if (error)
-		gfs_io_error(gl->gl_sbd);
-}
-
-/**
  * getbuf - Get a buffer with a given address space
  * @sdp: the filesystem
  * @aspace: the address space
@@ -731,11 +621,7 @@ gfs_dpin(struct gfs_sbd *sdp, struct buffer_head *bh)
 		   to in-place disk block, remove it from the AIL. */
 
 		spin_lock(&sdp->sd_ail_lock);
-		if (!list_empty(&bd->bd_ail_tr_list) && !buffer_busy(bh)) {
-			list_del_init(&bd->bd_ail_tr_list);
-			list_del(&bd->bd_ail_gl_list);
-			brelse(bh);
-		}
+		gfs_bd_ail_tryremove(sdp, bd);
 		spin_unlock(&sdp->sd_ail_lock);
 
 		clear_buffer_dirty(bh);
@@ -1079,11 +965,7 @@ gfs_wipe_buffers(struct gfs_inode *ip, struct gfs_rgrpd *rgd,
 						add = TRUE;
 					else {
 						spin_lock(&sdp->sd_ail_lock);
-						if (!list_empty(&bd->bd_ail_tr_list)) {
-							list_del_init(&bd->bd_ail_tr_list);
-							list_del(&bd->bd_ail_gl_list);
-							brelse(bh);
-						}
+						gfs_bd_ail_tryremove(sdp, bd);
 						spin_unlock(&sdp->sd_ail_lock);
 					}
 				} else {
diff --git a/gfs-kernel/src/gfs/dio.h b/gfs-kernel/src/gfs/dio.h
index 8de515e..3836d3a 100644
--- a/gfs-kernel/src/gfs/dio.h
+++ b/gfs-kernel/src/gfs/dio.h
@@ -14,6 +14,8 @@
 #ifndef __DIO_DOT_H__
 #define __DIO_DOT_H__
 
+#define buffer_busy(bh) ((bh)->b_state & ((1ul << BH_Dirty) | (1ul << BH_Lock)))
+
 void gfs_ail_start_trans(struct gfs_sbd *sdp, struct gfs_trans *tr);
 int gfs_ail_empty_trans(struct gfs_sbd *sdp, struct gfs_trans *tr);
 
@@ -85,7 +87,6 @@ struct inode *gfs_aspace_get(struct gfs_sbd *sdp);
 void gfs_aspace_put(struct inode *aspace);
 
 void gfs_inval_buf(struct gfs_glock *gl);
-void gfs_sync_buf(struct gfs_glock *gl, int flags);
 
 void gfs_flush_meta_cache(struct gfs_inode *ip);
 
@@ -167,4 +168,28 @@ gfs_buffer_copy_tail(struct buffer_head *to_bh, int to_head,
 	       from_head - to_head);
 }
 
+/*
+ * gfs_bd_ail_tryremove - try to remove a bd from the ail
+ * returns: 1 if it removed one, else 0
+ */
+static __inline__ int
+gfs_bd_ail_tryremove(struct gfs_sbd *sdp, struct gfs_bufdata *bd)
+{
+	struct buffer_head *bh;
+
+	bh = bd->bd_bh;
+	if (!bd->bd_pinned && !list_empty(&bd->bd_ail_tr_list) &&
+	    !buffer_busy(bh)) {
+		if (!buffer_uptodate(bh))
+			gfs_io_error_bh(sdp, bh);
+
+		list_del_init(&bd->bd_ail_tr_list);
+		list_del(&bd->bd_ail_gl_list);
+
+		brelse(bh);
+		return 1;
+	}
+	return 0;
+}
+
 #endif /* __DIO_DOT_H__ */
diff --git a/gfs-kernel/src/gfs/dir.c b/gfs-kernel/src/gfs/dir.c
index 741ea33..b53da73 100644
--- a/gfs-kernel/src/gfs/dir.c
+++ b/gfs-kernel/src/gfs/dir.c
@@ -811,7 +811,9 @@ dir_split_leaf(struct gfs_inode *dip, uint32_t index, uint64_t leaf_no)
 		    gfs32_to_cpu(dent->de_hash) < divider) {
 			name_len = gfs16_to_cpu(dent->de_name_len);
 
-			gfs_dirent_alloc(dip, nbh, name_len, &new);
+			error = gfs_dirent_alloc(dip, nbh, name_len, &new);
+			if (error)
+				goto fail_brelse;
 
 			new->de_inum = dent->de_inum; /* No endianness worries */
 			new->de_hash = dent->de_hash; /* No endianness worries */
@@ -844,7 +846,9 @@ dir_split_leaf(struct gfs_inode *dip, uint32_t index, uint64_t leaf_no)
 	   artificially fill in the first entry. */
 
 	if (!moved) {
-		gfs_dirent_alloc(dip, nbh, 0, &new);
+		error = gfs_dirent_alloc(dip, nbh, 0, &new);
+		if (error)
+			goto fail_brelse;
 		new->de_inum.no_formal_ino = 0;
 	}
 
@@ -854,6 +858,7 @@ dir_split_leaf(struct gfs_inode *dip, uint32_t index, uint64_t leaf_no)
 
 	error = gfs_get_inode_buffer(dip, &dibh);
 	if (!gfs_assert_withdraw(dip->i_sbd, !error)) {
+		gfs_trans_add_bh(dip->i_gl, dibh);
 		dip->i_di.di_blocks++;
 		gfs_dinode_out(&dip->i_di, dibh->b_data);
 		brelse(dibh);
@@ -939,6 +944,7 @@ dir_double_exhash(struct gfs_inode *dip)
 
 	error = gfs_get_inode_buffer(dip, &dibh);
 	if (!gfs_assert_withdraw(sdp, !error)) {
+		gfs_trans_add_bh(dip->i_gl, dibh);
 		dip->i_di.di_depth++;
 		gfs_dinode_out(&dip->i_di, dibh->b_data);
 		brelse(dibh);
@@ -1464,7 +1470,9 @@ dir_e_add(struct gfs_inode *dip, struct qstr *filename,
 				nleaf->lf_depth = leaf->lf_depth;
 				nleaf->lf_dirent_format = cpu_to_gfs32(GFS_FORMAT_DE);
 
-				gfs_dirent_alloc(dip, nbh, filename->len, &dent);
+				error = gfs_dirent_alloc(dip, nbh, filename->len, &dent);
+				if (error)
+					return error;
 
 				dip->i_di.di_blocks++;
 
@@ -1755,6 +1763,7 @@ dir_l_add(struct gfs_inode *dip, struct qstr *filename,
 	if (error)
 		return error;
 
+	gfs_trans_add_bh(dip->i_gl, dibh);
 	if (gfs_dirent_alloc(dip, dibh, filename->len, &dent)) {
 		brelse(dibh);
 
diff --git a/gfs-kernel/src/gfs/file.c b/gfs-kernel/src/gfs/file.c
index ac6deed..baec849 100644
--- a/gfs-kernel/src/gfs/file.c
+++ b/gfs-kernel/src/gfs/file.c
@@ -371,6 +371,8 @@ gfs_writei(struct gfs_inode *ip, void *buf,
 		if (error)
 			goto fail;
 
+		if (journaled)
+			gfs_trans_add_bh(ip->i_gl, bh);
 		error = copy_fn(ip, bh, &buf, o, amount, new);
 		brelse(bh);
 		if (error)
diff --git a/gfs-kernel/src/gfs/glock.c b/gfs-kernel/src/gfs/glock.c
index 1014ea2..119a9dd 100644
--- a/gfs-kernel/src/gfs/glock.c
+++ b/gfs-kernel/src/gfs/glock.c
@@ -1607,7 +1607,6 @@ gfs_glock_dq(struct gfs_holder *gh)
 	struct gfs_glock *gl = gh->gh_gl;
 	struct gfs_sbd *sdp = gl->gl_sbd;
 	struct gfs_glock_operations *glops = gl->gl_ops;
-	struct list_head *pos;
 
 	atomic_inc(&gl->gl_sbd->sd_glock_dq_calls);
 
@@ -2666,6 +2665,11 @@ clear_glock(struct gfs_glock *gl, unsigned int *unused)
 	struct gfs_sbd *sdp = gl->gl_sbd;
 	struct gfs_gl_hash_bucket *bucket = gl->gl_bucket;
 
+	/* If this isn't shutdown, keep the transaction glock around. */
+	if (sdp->sd_glockd_num && gl == sdp->sd_trans_gl) {
+		glock_put(gl);   /* see examine_bucket() */
+		return;
+	}
 	spin_lock(&sdp->sd_reclaim_lock);
 	if (!list_empty(&gl->gl_reclaim)) {
 		list_del_init(&gl->gl_reclaim);
diff --git a/gfs-kernel/src/gfs/glops.c b/gfs-kernel/src/gfs/glops.c
index f6dbcf8..6b7218a 100644
--- a/gfs-kernel/src/gfs/glops.c
+++ b/gfs-kernel/src/gfs/glops.c
@@ -51,10 +51,8 @@ meta_go_sync(struct gfs_glock *gl, int flags)
 	if (!(flags & DIO_METADATA))
 		return;
 
-	if (test_bit(GLF_DIRTY, &gl->gl_flags)) {
-		gfs_log_flush_glock(gl);
-		gfs_sync_buf(gl, flags | DIO_START | DIO_WAIT | DIO_CHECK);
-	}
+	if (test_bit(GLF_DIRTY, &gl->gl_flags))
+		gfs_log_flush_glock(gl, flags);
 
 	/* We've synced everything, clear SYNC request and DIRTY flags */
 	clear_bit(GLF_DIRTY, &gl->gl_flags);
@@ -234,12 +232,10 @@ inode_go_sync(struct gfs_glock *gl, int flags)
 	if (test_bit(GLF_DIRTY, &gl->gl_flags)) {
 		if (meta && data) {
 			gfs_sync_page(gl, flags | DIO_START);
-			gfs_log_flush_glock(gl);
-			gfs_sync_buf(gl, flags | DIO_START | DIO_WAIT | DIO_CHECK);
+			gfs_log_flush_glock(gl, flags);
 			gfs_sync_page(gl, flags | DIO_WAIT | DIO_CHECK);
 		} else if (meta) {
-			gfs_log_flush_glock(gl);
-			gfs_sync_buf(gl, flags | DIO_START | DIO_WAIT | DIO_CHECK);
+			gfs_log_flush_glock(gl, flags);
 		} else if (data)
 			gfs_sync_page(gl, flags | DIO_START | DIO_WAIT | DIO_CHECK);
 	}
diff --git a/gfs-kernel/src/gfs/incore.h b/gfs-kernel/src/gfs/incore.h
index c2409e7..18291fe 100644
--- a/gfs-kernel/src/gfs/incore.h
+++ b/gfs-kernel/src/gfs/incore.h
@@ -1128,6 +1128,8 @@ struct gfs_sbd {
 	unsigned int sd_log_buffers;	/* # of buffers in the incore log */
 
 	struct rw_semaphore sd_log_lock;	/* Lock for access to log values */
+	struct semaphore sd_log_flush_lock; /* Lock for function
+						  log_flush_internal */
 
 	uint64_t sd_log_dump_last;
 	uint64_t sd_log_dump_last_wrap;
diff --git a/gfs-kernel/src/gfs/log.c b/gfs-kernel/src/gfs/log.c
index 1e6588e..cbcd02b 100644
--- a/gfs-kernel/src/gfs/log.c
+++ b/gfs-kernel/src/gfs/log.c
@@ -38,9 +38,11 @@
 #include <asm/semaphore.h>
 #include <linux/completion.h>
 #include <linux/buffer_head.h>
+#include <linux/mm.h>
 
 #include "gfs.h"
 #include "dio.h"
+#include "glock.h"
 #include "log.h"
 #include "lops.h"
 
@@ -1045,7 +1047,78 @@ log_flush_internal(struct gfs_sbd *sdp, struct gfs_glock *gl)
 void
 gfs_log_flush(struct gfs_sbd *sdp)
 {
+	down(&sdp->sd_log_flush_lock); /* unlocked in gfs_sync_buf */
 	log_flush_internal(sdp, NULL);
+	up(&sdp->sd_log_flush_lock); /* locked in log_flush_internal */
+}
+
+/**
+ * ail_empty_gl - remove all buffers for a given lock from the AIL
+ * @gl: the glock
+ *
+ * None of the buffers should be dirty, locked, or pinned.
+ */
+
+static void
+ail_empty_gl(struct gfs_glock *gl)
+{
+	struct gfs_sbd *sdp = gl->gl_sbd;
+	struct gfs_bufdata *bd;
+	struct list_head *pos, *tmp;
+
+	spin_lock(&sdp->sd_ail_lock);
+
+	list_for_each_safe(pos, tmp, &gl->gl_ail_bufs) {
+		bd = list_entry(pos, struct gfs_bufdata, bd_ail_gl_list);
+		gfs_bd_ail_tryremove(sdp, bd);
+	}
+
+	spin_unlock(&sdp->sd_ail_lock);
+}
+
+/**
+ * gfs_inval_buf - Invalidate all buffers associated with a glock
+ * @gl: the glock
+ *
+ */
+
+void
+gfs_inval_buf(struct gfs_glock *gl)
+{
+	struct address_space *mapping = gl->gl_aspace->i_mapping;
+
+	/*down(&gl->gl_sbd->sd_log_flush_lock);*/
+	ail_empty_gl(gl);
+
+	truncate_inode_pages(mapping, 0);
+
+	gfs_assert_withdraw(gl->gl_sbd, !mapping->nrpages);
+	/*up(&gl->gl_sbd->sd_log_flush_lock);*/
+}
+
+/**
+ * gfs_sync_buf - Sync all buffers associated with a glock
+ * @gl: The glock
+ * @flags: DIO_START | DIO_WAIT | DIO_CHECK
+ *
+ */
+
+static void
+gfs_sync_buf(struct gfs_glock *gl, int flags)
+{
+	struct address_space *mapping = gl->gl_aspace->i_mapping;
+	int error = 0;
+
+	error = filemap_fdatawrite(mapping);
+	if (!error)
+		error = filemap_fdatawait(mapping);
+	if (!error) {
+		if (!(flags & DIO_INVISIBLE))
+			ail_empty_gl(gl);
+	}
+	if (error)
+		gfs_io_error(gl->gl_sbd);
+
 }
 
 /**
@@ -1055,9 +1128,15 @@ gfs_log_flush(struct gfs_sbd *sdp)
  */
 
 void
-gfs_log_flush_glock(struct gfs_glock *gl)
+gfs_log_flush_glock(struct gfs_glock *gl, int flags)
 {
+	struct gfs_sbd *sdp = gl->gl_sbd;
+
+	down(&sdp->sd_log_flush_lock);
 	log_flush_internal(gl->gl_sbd, gl);
+	if (flags)
+		gfs_sync_buf(gl, flags | DIO_CHECK);
+	up(&sdp->sd_log_flush_lock);
 }
 
 /**
diff --git a/gfs-kernel/src/gfs/log.h b/gfs-kernel/src/gfs/log.h
index 3853903..c4a2895 100644
--- a/gfs-kernel/src/gfs/log.h
+++ b/gfs-kernel/src/gfs/log.h
@@ -50,7 +50,7 @@ int gfs_ail_empty(struct gfs_sbd *sdp);
 
 void gfs_log_commit(struct gfs_sbd *sdp, struct gfs_trans *trans);
 void gfs_log_flush(struct gfs_sbd *sdp);
-void gfs_log_flush_glock(struct gfs_glock *gl);
+void gfs_log_flush_glock(struct gfs_glock *gl, int flags);
 
 void gfs_log_shutdown(struct gfs_sbd *sdp);
 
diff --git a/gfs-kernel/src/gfs/ops_address.c b/gfs-kernel/src/gfs/ops_address.c
index fb17133..6661421 100644
--- a/gfs-kernel/src/gfs/ops_address.c
+++ b/gfs-kernel/src/gfs/ops_address.c
@@ -396,8 +396,10 @@ gfs_commit_write(struct file *file, struct page *page,
 
 		SetPageUptodate(page);
 
-		if (inode->i_size < file_size)
+		if (inode->i_size < file_size) {
 			i_size_write(inode, file_size);
+			mark_inode_dirty(inode);
+		}
 	} else {
 		error = generic_commit_write(file, page, from, to);
 		if (error)
diff --git a/gfs-kernel/src/gfs/ops_file.c b/gfs-kernel/src/gfs/ops_file.c
index c444fda..4d263ee 100644
--- a/gfs-kernel/src/gfs/ops_file.c
+++ b/gfs-kernel/src/gfs/ops_file.c
@@ -599,7 +599,7 @@ do_write_direct_alloc(struct file *file, char *buf, size_t size, loff_t *offset,
 	 * 2. does gfs_log_flush_glock flush data ?
 	 */
 	if (file->f_flags & O_SYNC)
-		gfs_log_flush_glock(ip->i_gl);
+		gfs_log_flush_glock(ip->i_gl, 0);
 
 	gfs_inplace_release(ip);
 	gfs_quota_unlock_m(ip);
@@ -921,7 +921,7 @@ do_do_write_buf(struct file *file, char *buf, size_t size, loff_t *offset,
 	gfs_trans_end(sdp);
 
 	if (file->f_flags & O_SYNC || IS_SYNC(inode)) {
-		gfs_log_flush_glock(ip->i_gl);
+		gfs_log_flush_glock(ip->i_gl, 0);
 		error = filemap_fdatawrite(file->f_mapping);
 		if (error == 0)
 			error = filemap_fdatawait(file->f_mapping);
@@ -1587,7 +1587,7 @@ gfs_fsync(struct file *file, struct dentry *dentry, int datasync)
 		return error;
 
 	if (gfs_is_jdata(ip))
-		gfs_log_flush_glock(ip->i_gl);
+		gfs_log_flush_glock(ip->i_gl, 0);
 	else {
 		if ((!datasync) || (inode->i_state & I_DIRTY_DATASYNC)) {
 			struct writeback_control wbc = {
diff --git a/gfs-kernel/src/gfs/ops_fstype.c b/gfs-kernel/src/gfs/ops_fstype.c
index 2051524..de23cbd 100644
--- a/gfs-kernel/src/gfs/ops_fstype.c
+++ b/gfs-kernel/src/gfs/ops_fstype.c
@@ -131,6 +131,7 @@ fill_super(struct super_block *sb, void *data, int silent)
 	INIT_LIST_HEAD(&sdp->sd_log_ail);
 	INIT_LIST_HEAD(&sdp->sd_log_incore);
 	init_rwsem(&sdp->sd_log_lock);
+	init_MUTEX(&sdp->sd_log_flush_lock);
 	INIT_LIST_HEAD(&sdp->sd_unlinked_list);
 	spin_lock_init(&sdp->sd_unlinked_lock);
 	INIT_LIST_HEAD(&sdp->sd_quota_list);
@@ -632,8 +633,10 @@ fill_super(struct super_block *sb, void *data, int silent)
  fail_glockd:
 	clear_bit(SDF_GLOCKD_RUN, &sdp->sd_flags);
 	wake_up(&sdp->sd_reclaim_wchan);
-	while (sdp->sd_glockd_num--)
+	while (sdp->sd_glockd_num) {
 		wait_for_completion(&sdp->sd_thread_completion);
+		sdp->sd_glockd_num--;
+	}
 
 	down(&sdp->sd_thread_lock);
 	clear_bit(SDF_SCAND_RUN, &sdp->sd_flags);
diff --git a/gfs-kernel/src/gfs/ops_super.c b/gfs-kernel/src/gfs/ops_super.c
index d052883..9acabb4 100644
--- a/gfs-kernel/src/gfs/ops_super.c
+++ b/gfs-kernel/src/gfs/ops_super.c
@@ -54,7 +54,7 @@ gfs_write_inode(struct inode *inode, int sync)
 	atomic_inc(&ip->i_sbd->sd_ops_super);
 
 	if (ip && sync)
-		gfs_log_flush_glock(ip->i_gl);
+		gfs_log_flush_glock(ip->i_gl, 0);
 
 	return 0;
 }
@@ -140,8 +140,10 @@ gfs_put_super(struct super_block *sb)
 	/*  Kill off the glockd threads  */
 	clear_bit(SDF_GLOCKD_RUN, &sdp->sd_flags);
 	wake_up(&sdp->sd_reclaim_wchan);
-	while (sdp->sd_glockd_num--)
+	while (sdp->sd_glockd_num) {
 		wait_for_completion(&sdp->sd_thread_completion);
+		sdp->sd_glockd_num--;
+	}
 
 	/*  Kill off the scand thread  */
 	down(&sdp->sd_thread_lock);
diff --git a/gfs-kernel/src/gfs/quota.c b/gfs-kernel/src/gfs/quota.c
index ce90b77..0e91a34 100644
--- a/gfs-kernel/src/gfs/quota.c
+++ b/gfs-kernel/src/gfs/quota.c
@@ -617,7 +617,7 @@ do_quota_sync(struct gfs_sbd *sdp, struct gfs_quota_data **qda,
 
 	kfree(ghs);
 
-	gfs_log_flush_glock(ip->i_gl);
+	gfs_log_flush_glock(ip->i_gl, 0);
 
 	return 0;
 
diff --git a/gfs-kernel/src/gfs/recovery.c b/gfs-kernel/src/gfs/recovery.c
index 5c2e8e1..2ecee04 100644
--- a/gfs-kernel/src/gfs/recovery.c
+++ b/gfs-kernel/src/gfs/recovery.c
@@ -24,6 +24,7 @@
 #include "glock.h"
 #include "glops.h"
 #include "lm.h"
+#include "log.h"
 #include "lops.h"
 #include "recovery.h"
 
@@ -251,6 +252,7 @@ gfs_find_jhead(struct gfs_sbd *sdp, struct gfs_jindex *jdesc,
 	seg1 = 0;
 	seg2 = jdesc->ji_nsegment - 1;
 
+	gfs_log_lock(sdp);
 	for (;;) {
 		seg_m = (seg1 + seg2) / 2;
 
@@ -262,6 +264,7 @@ gfs_find_jhead(struct gfs_sbd *sdp, struct gfs_jindex *jdesc,
 			error = verify_jhead(sdp, jdesc, gl, &lh1);
 			if (unlikely(error)) {
 				printk("GFS: verify_jhead error=%d\n", error);
+				gfs_log_unlock(sdp);
 				return error;
 			}
 			memcpy(head, &lh1, sizeof(struct gfs_log_header));
@@ -278,6 +281,7 @@ gfs_find_jhead(struct gfs_sbd *sdp, struct gfs_jindex *jdesc,
 			seg2 = seg_m;
 	}
 
+	gfs_log_unlock(sdp);
 	return error;
 }
 
diff --git a/gfs-kernel/src/gfs/trans.c b/gfs-kernel/src/gfs/trans.c
index a0b2204..56aa336 100644
--- a/gfs-kernel/src/gfs/trans.c
+++ b/gfs-kernel/src/gfs/trans.c
@@ -88,6 +88,9 @@ gfs_trans_begin_i(struct gfs_sbd *sdp,
 	unsigned int blocks;
 	int error;
 
+	if (test_bit(SDF_ROFS, &sdp->sd_flags))
+		return -EROFS;
+
 	tr = kmalloc(sizeof(struct gfs_trans), GFP_KERNEL);
 	if (!tr)
 		return -ENOMEM;
@@ -110,12 +113,6 @@ gfs_trans_begin_i(struct gfs_sbd *sdp,
 	if (error)
 		goto fail_holder_put;
 
-	if (test_bit(SDF_ROFS, &sdp->sd_flags)) {
-		tr->tr_t_gh->gh_flags |= GL_NOCACHE;
-		error = -EROFS;
-		goto fail_gunlock;
-	}
-
 	/*  Do log reservation  */
 
 	tr->tr_mblks_asked = meta_blocks;

