public inbox for cluster-cvs@sourceware.org
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-07-20 20:19 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-07-20 20:19 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: teigland@sourceware.org 2006-07-20 20:19:44
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
If mount.gfs is unmounting/leaving the group because the kernel mount
failed, don't wait for the kernel mount to complete before doing
the leave.
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&r1=1.6&r2=1.7
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&r1=1.5&r2=1.6
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&r1=1.3&r2=1.4
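The decision described in the log message can be sketched as a small predicate. This is a minimal illustration, not gfs_controld's actual code; the function and flag names here are hypothetical.

```c
/* A minimal sketch of the decision described in the log message above.
 * The function and flag names are hypothetical, not gfs_controld's
 * actual symbols. */
#include <assert.h>

/* Returns 1 if a leave should wait for the kernel mount to complete,
 * 0 if it should proceed immediately. */
int leave_should_wait(int kernel_mount_done, int kernel_mount_error)
{
    /* If the kernel mount failed, mount.gfs is unmounting/leaving to
     * clean up; that mount will never complete, so don't wait for it. */
    if (kernel_mount_error)
        return 0;
    /* Otherwise wait only if the kernel mount is still in progress. */
    return kernel_mount_done ? 0 : 1;
}
```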
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-12-05 22:19 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-12-05 22:19 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: teigland@sourceware.org 2006-12-05 22:19:17
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
Before doing the mount-group portion of withdraw, fork off a dmsetup to
suspend the fs device. This means gfs doesn't need to call dm_suspend()
in the kernel before calling out to us. The suspend waits for all
outstanding I/O to return on the device, which is necessary before
telling other nodes to do recovery. (Later we should probably swap
in an error table and resume the device.)
bz 215962
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&r1=1.23&r2=1.24
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&r1=1.26&r2=1.27
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&r1=1.23&r2=1.24
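The "fork off a dmsetup" step described above amounts to forking and execing `dmsetup suspend` on the device-mapper device, which blocks until outstanding I/O has returned. A minimal sketch, assuming that approach; the function name and error handling are illustrative, not gfs_controld's actual code:

```c
/* Sketch: fork and exec "dmsetup suspend <name>" and wait for it,
 * so the caller blocks until the device's outstanding I/O has
 * returned.  Returns 0 on success, -1 on any failure. */
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int suspend_fs_device(const char *mapper_name)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: run dmsetup suspend on the fs device. */
        execlp("dmsetup", "dmsetup", "suspend", mapper_name, (char *)NULL);
        _exit(127);  /* exec failed, e.g. dmsetup not installed */
    }
    /* Parent: wait for dmsetup to finish (i.e. for I/O to drain). */
    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 0 : -1;
}
```

The later idea in the parenthetical would correspond to `dmsetup load <dev> --table "0 <sectors> error"` followed by `dmsetup resume <dev>`, so that queued I/O fails instead of hanging.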
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-12-05 22:24 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-12-05 22:24 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Branch: RHEL5
Changes by: teigland@sourceware.org 2006-12-05 22:24:29
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
Before doing the mount-group portion of withdraw, fork off a dmsetup to
suspend the fs device. This means gfs doesn't need to call dm_suspend()
in the kernel before calling out to us. The suspend waits for all
outstanding I/O to return on the device, which is necessary before
telling other nodes to do recovery. (Later we should probably swap
in an error table and resume the device.)
bz 215962
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.2&r2=1.21.2.3
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.8&r2=1.18.2.9
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.23&r2=1.23.2.1
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-12-05 22:24 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-12-05 22:24 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Branch: RHEL50
Changes by: teigland@sourceware.org 2006-12-05 22:24:37
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
Before doing the mount-group portion of withdraw, fork off a dmsetup to
suspend the fs device. This means gfs doesn't need to call dm_suspend()
in the kernel before calling out to us. The suspend waits for all
outstanding I/O to return on the device, which is necessary before
telling other nodes to do recovery. (Later we should probably swap
in an error table and resume the device.)
bz 215962
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&only_with_tag=RHEL50&r1=1.21.4.2&r2=1.21.4.3
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&only_with_tag=RHEL50&r1=1.18.4.7&r2=1.18.4.8
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&only_with_tag=RHEL50&r1=1.23&r2=1.23.4.1
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-12-20 19:13 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-12-20 19:13 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Changes by: teigland@sourceware.org 2006-12-20 19:13:13
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
Support mounting a single fs on multiple mount points.
bz 218560
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&r1=1.26&r2=1.27
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&r1=1.27&r2=1.28
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&r1=1.27&r2=1.28
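Supporting one fs on multiple mount points typically means refcounting: only the first mount joins the mount group and only the last unmount leaves it. A hedged sketch of that idea; the struct and function names are illustrative, not the actual gfs_controld symbols.

```c
/* Sketch of refcounting mount points so a single fs mounted in
 * several places keeps one mount-group membership.  Names here are
 * illustrative, not gfs_controld's actual code. */
#include <assert.h>

struct mountgroup {
    int mount_count;  /* active mount points for this fs */
    int joined;       /* nonzero while a member of the mount group */
};

/* Returns 1 if this mount must actually join the mount group. */
int mount_point_added(struct mountgroup *mg)
{
    if (mg->mount_count++ == 0) {
        mg->joined = 1;
        return 1;     /* first mount: join the group */
    }
    return 0;         /* additional mount point: reuse membership */
}

/* Returns 1 if this unmount must actually leave the mount group. */
int mount_point_removed(struct mountgroup *mg)
{
    if (--mg->mount_count == 0) {
        mg->joined = 0;
        return 1;     /* last unmount: leave the group */
    }
    return 0;
}
```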
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-12-20 19:14 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-12-20 19:14 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Branch: RHEL5
Changes by: teigland@sourceware.org 2006-12-20 19:14:41
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
Support mounting a single fs on multiple mount points.
bz 218560
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.21.2.5&r2=1.21.2.6
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.18.2.9&r2=1.18.2.10
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.23.2.4&r2=1.23.2.5
* cluster/group/gfs_controld lock_dlm.h main.c r ...
@ 2006-12-20 19:16 teigland
0 siblings, 0 replies; 7+ messages in thread
From: teigland @ 2006-12-20 19:16 UTC (permalink / raw)
To: cluster-cvs
CVSROOT: /cvs/cluster
Module name: cluster
Branch: RHEL50
Changes by: teigland@sourceware.org 2006-12-20 19:16:21
Modified files:
group/gfs_controld: lock_dlm.h main.c recover.c
Log message:
Support mounting a single fs on multiple mount points.
bz 218560
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/lock_dlm.h.diff?cvsroot=cluster&only_with_tag=RHEL50&r1=1.21.4.5&r2=1.21.4.6
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/main.c.diff?cvsroot=cluster&only_with_tag=RHEL50&r1=1.18.4.8&r2=1.18.4.9
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/gfs_controld/recover.c.diff?cvsroot=cluster&only_with_tag=RHEL50&r1=1.23.4.4&r2=1.23.4.5