Subject: cluster: STABLE3 - doc: update usage.txt
From: David Teigland @ 2009-07-23 18:04 UTC
To: cluster-cvs-relay

Gitweb:        http://git.fedorahosted.org/git/cluster.git?p=cluster.git;a=commitdiff;h=ab181b7303ccd66ab4bd67a08ed06136b4a20a93
Commit:        ab181b7303ccd66ab4bd67a08ed06136b4a20a93
Parent:        bac5088a3ae7c03753c9c25f5a9799d67fbdb2ca
Author:        David Teigland <teigland@redhat.com>
AuthorDate:    Thu Jul 23 12:41:16 2009 -0500
Committer:     David Teigland <teigland@redhat.com>
CommitterDate: Thu Jul 23 12:44:03 2009 -0500

doc: update usage.txt

for cluster3

Signed-off-by: David Teigland <teigland@redhat.com>
---
 doc/usage.txt |  201 ++++++++++++++++-----------------------------------------
 1 files changed, 57 insertions(+), 144 deletions(-)

diff --git a/doc/usage.txt b/doc/usage.txt
index f9e2866..ad53a95 100644
--- a/doc/usage.txt
+++ b/doc/usage.txt
@@ -1,177 +1,90 @@
-How to install and run GFS.
-
-Refer to the cluster project page for the latest information.
-http://sources.redhat.com/cluster/
-
-
-Install
--------
-
-Install a Linux kernel with GFS2, DLM, configfs, IPV6 and SCTP,
-  2.6.23-rc1 or later
-
-  If you want to use gfs1 (from cluster/gfs-kernel), then you need to
-  export three additional symbols from gfs2 by adding the following lines
-  to the end of linux/fs/gfs2/locking.c:
-     EXPORT_SYMBOL_GPL(gfs2_unmount_lockproto);
-     EXPORT_SYMBOL_GPL(gfs2_mount_lockproto);
-     EXPORT_SYMBOL_GPL(gfs2_withdraw_lockproto);
-
-Install openais
-  get the latest "whitetank" (stable) release from
-  http://openais.org/
-  or
-  svn checkout http://svn.osdl.org/openais
-  cd openais/branches/whitetank
-  make; make install DESTDIR=/
-
-Install gfs/dlm/fencing/etc components
-  get the latest cluster-2.xx.yy tarball from
-  ftp://sources.redhat.com/pub/cluster/
-  or
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster login cvs
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/cluster checkout cluster
-  the password is "cvs"
-  cd cluster
-  ./configure --kernel_src=/path/to/kernel
-  make install
-
-  NOTE: On 64-bit systems, you will usually need to add '--libdir=/usr/lib64'
-        to the configure line.
-
-Install LVM2/CLVM (optional)
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 login cvs
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2 checkout LVM2
-  cvs -d :pserver:cvs@sources.redhat.com:/cvs/lvm2
-  the password is "cvs"
-  cd LVM2
-  ./configure --with-clvmd=cman --with-cluster=shared
-  make; make install
-
-  NOTE: On 64-bit systems, you will usually need to add '--libdir=/usr/lib64'
-        to the configure line.
-
-Load kernel modules
--------------------
-
-modprobe gfs2
-modprobe gfs
-modprobe lock_dlm
-modprobe lock_nolock
-modprobe dlm
-
-
-Configuration
--------------
-
-Create /etc/cluster/cluster.conf and copy it to all nodes.
-
-  The format and content of cluster.conf has changed little since the
-  last generation of the software.  See old example here:
-  http://sources.redhat.com/cluster/doc/usage.txt
-  The one change you will need to make is to add nodeids for all nodes
-  in the cluster. These are now mandatory. eg:
-
-     <clusternode name="node12.mycluster.mycompany.com" votes="1" nodeid="12">
-
-  If you already have a cluster.conf file with no nodeids in it, then you can
-  use the 'ccs_tool addnodeids' command to add them.
+cluster3 minimal setup and usage
 
+cluster configuration
+---------------------
 
-Example cluster.conf
---------------------
+Create /etc/cluster/cluster.conf and copy it to all nodes.
 
-This is a basic cluster.conf file that requires manual fencing.  The node
-names should resolve to the address on the network interface you want to
-use for openais/cman/dlm communication.
+Below is a minimal cluster.conf file using manual fencing.  The node names
+should resolve to the address on the network interface you want to use for
+cluster communication.
 
 <?xml version="1.0"?>
 <cluster name="alpha" config_version="1">
 
 <clusternodes>
-<clusternode name="node01" nodeid="1">
-        <fence>
-        </fence>
-</clusternode>
-
-<clusternode name="node02" nodeid="2">
-        <fence>
-        </fence>
-</clusternode>
-
-<clusternode name="node03" nodeid="3">
-        <fence>
-        </fence>
-</clusternode>
+<clusternode name="node-01" nodeid="1"/>
+<clusternode name="node-02" nodeid="2"/>
+<clusternode name="node-03" nodeid="3"/>
 </clusternodes>
 
-<fencedevices>
-</fencedevices>
-
 </cluster>
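+
+One way to copy the config to the other nodes (hostnames from the example
+above; adjust for your cluster):
+
+> for n in node-02 node-03; do scp /etc/cluster/cluster.conf $n:/etc/cluster/; done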
 
 
-Startup procedure
------------------
+cluster start
+-------------
+
+Use the init script on all nodes:
+
+> service cman start
 
-Run these commands on each cluster node:
+Or, minimal manual steps:
 
+> modprobe configfs
+> modprobe dlm
+> modprobe gfs2 (if using gfs2)
 > mount -t configfs none /sys/kernel/config
-> ccsd
 > cman_tool join
-> groupd
 > fenced
-> fence_tool join
 > dlm_controld
-> gfs_controld
-> clvmd (optional)
-> mkfs -t gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdev>
-> mount -t gfs2 [-v] <blockdev> <mountpoint>
+> gfs_controld (if using gfs2)
+> fence_tool join
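+
+To avoid unnecessary fencing at startup, it's best for all nodes to complete
+cman_tool join before any of them run fence_tool join.
+
+Check cluster status and membership, and the state of the fence, dlm and gfs
+groups on the local node, with:
+
+> cman_tool status
+> cman_tool nodes
+> group_tool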
+
+
+using clvm
+----------
+
+Use the init script on all nodes:
+
+> service clvmd start
+
+Or, manually:
+
+> clvmd
+> vgscan
+> vgchange -aly
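+
+A clustered VG and LV can then be created as usual, e.g. (the device and
+names below are only examples):
+
+> pvcreate /dev/sdb
+> vgcreate -cy vg_cluster /dev/sdb
+> lvcreate -n lv0 -L 100G vg_cluster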
+
+
+using rgmanager
+---------------
+
+Use the init script on all nodes:
 
-Notes:
-- replace "gfs2" with "gfs" above to use gfs1 instead of gfs2
-- <clustername> in mkfs should match the one in cluster.conf.
-- <fsname> in mkfs is any name you pick, each fs must have a different name.
-- <#journals> in mkfs should be greater than or equal to the number of nodes
-  that you want to mount this fs, each node uses a separate journal.
-- To avoid unnecessary fencing when starting the cluster, it's best for
-  all nodes to join the cluster (complete cman_tool join) before any
-  of them do fence_tool join.
-- The cman_tool "status" and "nodes" options show the status and members
-  of the cluster.
-- The group_tool command shows the status of fencing, dlm and gfs groups
-  that the local node is part of.
-- The "cman" init script can be used for starting everything up through
-  gfs_controld in the list above.
+> service rgmanager start
 
+Or, manually:
 
-Shutdown procedure
-------------------
+> rgmanager
 
-Run these commands on each cluster node:
+Create services/resources to be managed in cluster.conf.
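+
+A minimal sketch of an rm section, assuming a hypothetical service managing
+an IP address and an init script (the address and script path are examples):
+
+<rm>
+        <service name="example" autostart="1">
+                <ip address="10.0.0.50" monitor_link="1"/>
+                <script name="example-app" file="/etc/init.d/example-app"/>
+        </service>
+</rm>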
 
-> umount [-v] <mountpoint>
-> fence_tool leave
-> cman_tool leave
 
+using gfs2
+----------
 
-Converting from GFS1 to GFS2
-----------------------------
+Create new file systems, using the cluster name from cluster.conf.  Pick a
+unique name for each fs and select a number of journals greater than or equal
+to the number of nodes that will mount the fs.
 
-If you have GFS1 filesystems that you need to convert to GFS2, follow
-this procedure:
+> mkfs.gfs2 -p lock_dlm -t <clustername>:<fsname> -j <#journals> <blockdev>
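+
+For example, for the three node cluster "alpha" above (the block device name
+is just an example):
+
+> mkfs.gfs2 -p lock_dlm -t alpha:gfs01 -j 3 /dev/vg_cluster/lv0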
 
-1. Back up your entire filesystem first.
-   e.g. cp /dev/your_vg/lvol0 /your_gfs_backup
+Use the gfs2 init script to automate mounting gfs2 fs's listed in /etc/fstab:
 
-2. Run fsck to ensure filesystem integrity.
-   e.g. gfs2_fsck /dev/your_vg/lvol0
+> service gfs2 start
 
-3. Make sure the filesystem is not mounted from any node.
-   e.g. for i in `grep "<clusternode name" /etc/cluster/cluster.conf | cut -d '"' -f2` ; do ssh $i "mount | grep gfs" ; done
+Or, manually:
 
-4. Make sure you have the latest software versions.
+> mount -t gfs2 <blockdev> <mountpoint>
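+
+For example, mounting the fs made above (the mount point is arbitrary):
+
+> mount -t gfs2 /dev/vg_cluster/lv0 /mnt/gfs01
+
+A matching /etc/fstab entry for the gfs2 init script might look like:
+
+/dev/vg_cluster/lv0  /mnt/gfs01  gfs2  defaults  0 0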
 
-5. Run gfs2_convert <blockdev> from one of the nodes.
-   e.g. gfs2_convert /dev/your_vg/lvol0
+(Replace "gfs2" with "gfs" everywhere above to use gfs instead of gfs2.)
 

