From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 10168 invoked by alias); 23 Jul 2009 18:04:35 -0000
Received: (qmail 10070 invoked by alias); 23 Jul 2009 18:04:35 -0000
X-SWARE-Spam-Status: No, hits=-2.0 required=5.0 tests=AWL,BAYES_00,SPF_HELO_PASS
X-Spam-Status: No, hits=-2.0 required=5.0 tests=AWL,BAYES_00,SPF_HELO_PASS
X-Spam-Check-By: sourceware.org
X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on bastion2.fedora.phx.redhat.com
Subject: cluster: STABLE3 - doc: update usage.txt
To: cluster-cvs-relay@redhat.com
X-Project: Cluster Project
X-Git-Module: cluster.git
X-Git-Refname: refs/heads/STABLE3
X-Git-Reftype: branch
X-Git-Oldrev: bac5088a3ae7c03753c9c25f5a9799d67fbdb2ca
X-Git-Newrev: ab181b7303ccd66ab4bd67a08ed06136b4a20a93
From: David Teigland
Message-Id: <20090723175220.0B84112022F@lists.fedorahosted.org>
Date: Thu, 23 Jul 2009 18:04:00 -0000
X-Scanned-By: MIMEDefang 2.58 on 172.16.52.254
Mailing-List: contact cluster-cvs-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Id:
List-Subscribe:
List-Post:
List-Help: ,
Sender: cluster-cvs-owner@sourceware.org
X-SW-Source: 2009-q3/txt/msg00089.txt.bz2

Gitweb:        http://git.fedorahosted.org/git/cluster.git?p=cluster.git;a=commitdiff;h=ab181b7303ccd66ab4bd67a08ed06136b4a20a93
Commit:        ab181b7303ccd66ab4bd67a08ed06136b4a20a93
Parent:        bac5088a3ae7c03753c9c25f5a9799d67fbdb2ca
Author:        David Teigland
AuthorDate:    Thu Jul 23 12:41:16 2009 -0500
Committer:     David Teigland
CommitterDate: Thu Jul 23 12:44:03 2009 -0500

doc: update usage.txt for cluster3

Signed-off-by: David Teigland
---
 doc/usage.txt |  201 ++++++++++++++++-----------------------------------------
 1 files changed, 57 insertions(+), 144 deletions(-)

diff --git a/doc/usage.txt b/doc/usage.txt
index f9e2866..ad53a95 100644
--- a/doc/usage.txt
+++ b/doc/usage.txt
@@ -1,177 +1,90 @@
-How to install and run GFS.
-
-Refer to the cluster project page for the latest information.
-http://sources.redhat.com/cluster/
-
-
-Install
--------
-
-Install a Linux kernel with GFS2, DLM, configfs, IPV6 and SCTP,
-  2.6.23-rc1 or later
-
-  If you want to use gfs1 (from cluster/gfs-kernel), then you need to
-  export three additional symbols from gfs2 by adding the following lines
-  to the end of linux/fs/gfs2/locking.c:
-    EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);
-    EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
-    EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
-
-Install openais
-  get the latest "whitetank" (stable) release from
-  http://openais.org/
-  or
-  svn checkout http://svn.osdl.org/openais
-  cd openais/branches/whitetank
-  make; make install DESTDIR=/
-
-Install gfs/dlm/fencing/etc components
-  get the latest cluster-2.xx.yy tarball from
-  ftp://sources.redhat.com/pub/cluster/
-  or
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster login cvs
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster checkout cluster
-  the password is "cvs"
-  cd cluster
-  ./configure --kernel_src=/path/to/kernel
-  make install
-
-  NOTE: On 64-bit systems, you will usually need to add '--libdir=/usr/lib64'
-  to the configure line.
-
-Install LVM2/CLVM (optional)
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 login cvs
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 checkout LVM2
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2
-  the password is "cvs"
-  cd LVM2
-  ./configure --with-clvmd=cman --with-cluster=shared
-  make; make install
-
-  NOTE: On 64-bit systems, you will usually need to add '--libdir=/usr/lib64'
-  to the configure line.
-
-Load kernel modules
--------------------
-
-modprobe gfs2
-modprobe gfs
-modprobe lock_dlm
-modprobe lock_nolock
-modprobe dlm
-
-
-Configuration
--------------
-
-Create /etc/cluster/cluster.conf and copy it to all nodes.
-
-  The format and content of cluster.conf has changed little since the
-  last generation of the software.
-  See old example here:
-  http://sources.redhat.com/cluster/doc/usage.txt
-  The one change you will need to make is to add nodeids for all nodes
-  in the cluster.  These are now mandatory.  eg:
-
-
-
-  If you already have a cluster.conf file with no nodeids in it, then you can
-  use the 'ccs_tool addnodeids' command to add them.
+cluster3 minimal setup and usage
+
+
+cluster configuration
+---------------------

-Example cluster.conf
---------------------

+Create /etc/cluster/cluster.conf and copy it to all nodes.

-This is a basic cluster.conf file that requires manual fencing.  The node
-names should resolve to the address on the network interface you want to
-use for openais/cman/dlm communication.
+Below is a minimal cluster.conf file using manual fencing.  The node names
+should resolve to the address on the network interface you want to use for
+cluster communication.

-
-
-
-
-
-
-
-
-
-
-
-
-
-
+
+
+
-
-
-
-Startup procedure
------------------
+cluster start
+-------------
+
+Use the init script on all nodes:
+
+> service cman start
-Run these commands on each cluster node:
+Or, minimal manual steps:
+> modprobe configfs
+> modprobe dlm
+> modprobe gfs2 (if using gfs2)
 > mount -t configfs none /sys/kernel/config
-> ccsd
 > cman_tool join
-> groupd
 > fenced
-> fence_tool join
 > dlm_controld
-> gfs_controld
-> clvmd (optional)
-> mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals>
-> mount -t gfs2 [-v]
+> gfs_controld (if using gfs2)
+> fence_tool join
+
+
+using clvm
+----------
+
+Use the init script on all nodes:
+
+> service clvmd start
+
+Or, manually:
+
+> clvmd
+> vgscan
+> vgchange -aly
+
+
+using rgmanager
+---------------
+
+Use the init script on all nodes:
-Notes:
-- replace "gfs2" with "gfs" above to use gfs1 instead of gfs2
-- <clustername> in mkfs should match the one in cluster.conf.
-- <fsname> in mkfs is any name you pick, each fs must have a different name.
-- <#journals> in mkfs should be greater than or equal to the number of nodes
-  that you want to mount this fs, each node uses a separate journal.
-- To avoid unnecessary fencing when starting the cluster, it's best for
-  all nodes to join the cluster (complete cman_tool join) before any
-  of them do fence_tool join.
-- The cman_tool "status" and "nodes" options show the status and members
-  of the cluster.
-- The group_tool command shows the status of fencing, dlm and gfs groups
-  that the local node is part of.
-- The "cman" init script can be used for starting everything up through
-  gfs_controld in the list above.
+> service rgmanager start
+
+Or, manually:

-Shutdown procedure
-------------------
+> rgmanager

-Run these commands on each cluster node:
+Create services/resources to be managed in cluster.conf.

-> umount [-v]
-> fence_tool leave
-> cman_tool leave
+
+using gfs2
+----------

-Converting from GFS1 to GFS2
-----------------------------
+Create new file systems, using the cluster name from cluster.conf.  Pick a
+unique name for each fs and select a number of journals greater than or equal
+to the number of nodes that will mount the fs.

-If you have GFS1 filesystems that you need to convert to GFS2, follow
-this procedure:
+> mkfs.gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals>

-1. Back up your entire filesystem first.
-   e.g. cp /dev/your_vg/lvol0 /your_gfs_backup
+Use the gfs2 init script to automate mounting gfs2 fs's listed in /etc/fstab:

-2. Run fsck to ensure filesystem integrity.
-   e.g. gfs2_fsck /dev/your_vg/lvol0
+> service gfs2 start

-3. Make sure the filesystem is not mounted from any node.
-   e.g. for i in `grep "
 mount -t gfs2
-5. Run gfs2_convert from one of the nodes.
-   e.g. gfs2_convert /dev/your_vg/lvol0
+(Replace "gfs2" with "gfs" everywhere above to use gfs instead of gfs2.)
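
[Editor's note: the literal cluster.conf examples in the diff above were eaten
by the archive's HTML rendering, leaving only bare "-"/"+" markers. As a rough
sketch of the kind of minimal file the new usage.txt describes, a three-node
cluster3 cluster.conf generally looks like the following; the cluster name
"alpha" and the node names are placeholders, not recovered from the original
message:]

```xml
<?xml version="1.0"?>
<!-- minimal cluster3 configuration sketch; names are placeholders -->
<cluster name="alpha" config_version="1">
  <clusternodes>
    <!-- nodeid is mandatory for every node in cluster3 -->
    <clusternode name="node-01" nodeid="1"/>
    <clusternode name="node-02" nodeid="2"/>
    <clusternode name="node-03" nodeid="3"/>
  </clusternodes>
</cluster>
```

The node names should resolve to the address on the network interface used for
cluster communication, and the same file must be copied to all nodes.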
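
[Editor's note: the "Create services/resources to be managed in cluster.conf"
step refers to an <rm> section inside cluster.conf. A skeletal sketch is shown
below; the service name "example" is a placeholder and a real service would
reference actual resource agents:]

```xml
<!-- rgmanager section sketch inside <cluster>; names are placeholders -->
<rm>
  <failoverdomains/>
  <resources/>
  <service name="example" autostart="1"/>
</rm>
```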
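
[Editor's note: the gfs2 init script mounts file systems listed in /etc/fstab.
A hypothetical fstab entry is sketched below; the device path and mount point
are placeholders, not from the original message:]

```
/dev/vg0/lv0   /mnt/gfs0   gfs2   defaults   0 0
```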