From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 6084 invoked by alias); 17 Apr 2008 14:15:14 -0000
Received: (qmail 6047 invoked by uid 9476); 17 Apr 2008 14:15:13 -0000
Date: Thu, 17 Apr 2008 14:15:00 -0000
Message-ID: <20080417141512.6031.qmail@sourceware.org>
From: lhh@sourceware.org
To: cluster-cvs@sources.redhat.com, cluster-devel@redhat.com
Subject: Cluster Project branch, RHEL5, updated. cmirror_1_1_15-52-gb2686ff
X-Git-Refname: refs/heads/RHEL5
X-Git-Reftype: branch
X-Git-Oldrev: 1e15a60fe192ae8ba3fd608db6af777e947c8a5e
X-Git-Newrev: b2686ffe984c517110b949d604c54a71800b67c9
Mailing-List: contact cluster-cvs-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Id:
List-Subscribe:
List-Post:
List-Help:
Sender: cluster-cvs-owner@sourceware.org
X-SW-Source: 2008-q2/txt/msg00133.txt.bz2

This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "Cluster Project".

http://sources.redhat.com/git/gitweb.cgi?p=cluster.git;a=commitdiff;h=b2686ffe984c517110b949d604c54a71800b67c9

The branch, RHEL5 has been updated
       via  b2686ffe984c517110b949d604c54a71800b67c9 (commit)
       via  4bc8e7b01dff841358666e7596b6726a880e7b62 (commit)
      from  1e15a60fe192ae8ba3fd608db6af777e947c8a5e (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.
- Log -----------------------------------------------------------------
commit b2686ffe984c517110b949d604c54a71800b67c9
Author: Lon Hohberger
Date:   Thu Apr 17 10:14:51 2008 -0400

    [cman] Fix incarnation assignment ordering bug

    This bug causes "Node X is undead" in a loop; bz 442541

commit 4bc8e7b01dff841358666e7596b6726a880e7b62
Author: Lon Hohberger
Date:   Wed Apr 2 13:38:54 2008 -0400

    [rgmanager] Remove obsolete clushutdown utility

-----------------------------------------------------------------------

Summary of changes:
 cman/qdisk/main.c               |   34 ++++++++++++------------
 rgmanager/man/clushutdown.8     |   13 ---------
 rgmanager/src/utils/clushutdown |   53 ---------------------------------------
 3 files changed, 17 insertions(+), 83 deletions(-)
 delete mode 100644 rgmanager/man/clushutdown.8
 delete mode 100755 rgmanager/src/utils/clushutdown

diff --git a/cman/qdisk/main.c b/cman/qdisk/main.c
index b3c29f4..6bec85a 100644
--- a/cman/qdisk/main.c
+++ b/cman/qdisk/main.c
@@ -254,23 +254,6 @@ check_transitions(qd_ctx *ctx, node_info_t *ni, int max, memb_mask_t mask)
 		    state_run(ni[x].ni_status.ps_state)) {
 
 			/*
-			   Write eviction notice if we're the master.
-			 */
-			if (ctx->qc_status == S_MASTER) {
-				clulog(LOG_NOTICE,
-				       "Writing eviction notice for node %d\n",
-				       ni[x].ni_status.ps_nodeid);
-				qd_write_status(ctx, ni[x].ni_status.ps_nodeid,
-						S_EVICT, NULL, NULL, NULL);
-				if (ctx->qc_flags & RF_ALLOW_KILL) {
-					clulog(LOG_DEBUG, "Telling CMAN to "
-					       "kill the node\n");
-					cman_kill_node(ctx->qc_ch,
-						ni[x].ni_status.ps_nodeid);
-				}
-			}
-
-			/*
 			   Mark our internal views as dead if nodes miss too
 			   many heartbeats...  This will cause a master
 			   transition if no live master exists.
@@ -287,6 +270,23 @@ check_transitions(qd_ctx *ctx, node_info_t *ni, int max, memb_mask_t mask)
 				ni[x].ni_evil_incarnation =
 					ni[x].ni_status.ps_incarnation;
 
+			/*
+			   Write eviction notice if we're the master.
+			 */
+			if (ctx->qc_status == S_MASTER) {
+				clulog(LOG_NOTICE,
+				       "Writing eviction notice for node %d\n",
+				       ni[x].ni_status.ps_nodeid);
+				qd_write_status(ctx, ni[x].ni_status.ps_nodeid,
+						S_EVICT, NULL, NULL, NULL);
+				if (ctx->qc_flags & RF_ALLOW_KILL) {
+					clulog(LOG_DEBUG, "Telling CMAN to "
+					       "kill the node\n");
+					cman_kill_node(ctx->qc_ch,
+						ni[x].ni_status.ps_nodeid);
+				}
+			}
+
 			/* Clear our master mask for the node after eviction */
 			if (mask)
 				clear_bit(mask, (ni[x].ni_status.ps_nodeid-1),
diff --git a/rgmanager/man/clushutdown.8 b/rgmanager/man/clushutdown.8
deleted file mode 100644
index c63159f..0000000
--- a/rgmanager/man/clushutdown.8
+++ /dev/null
@@ -1,13 +0,0 @@
-.TH "clushutdown" "27" "Jan 2005" "" "Red Hat Cluster Suite"
-.SH "NAME"
-clushutdown \- Cluster Mass Service Shutdown
-.SH "DESCRIPTION"
-.PP
-.B Clushutdown
-is responsible for stopping all services and ensuring that none are restarted
-when a member goes off line. It is only useful for situations where an
-administrator needs to take enough cluster members offline such that the
-cluster quorum will be disrupted. This is not required for shutting down a
-single member when all other members are online.
-.SH "SEE ALSO"
-clusvcadm(8)
diff --git a/rgmanager/src/utils/clushutdown b/rgmanager/src/utils/clushutdown
deleted file mode 100755
index ef3eb72..0000000
--- a/rgmanager/src/utils/clushutdown
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/bin/bash
-#
-# Stop all services and prepare the cluster for a TCO.
-#
-. /etc/init.d/functions
-
-action $"Ensuring this member is in the Quorum:" clustat -Q
-if [ $? -ne 0 ]; then
-	exit 1
-fi
-
-echo
-echo "WARNING: About to stop ALL services managed by Red Hat Cluster Manager."
-echo "         This should only be done when maintainence is required on "
-echo "         enough members to dissolve the Cluster Quorum.  This utility"
-echo "         generally does not need to be run when one cluster member"
-echo "         requires maintenance.  This NEVER needs to be run in two"
-echo "         member clusters."
-echo
-echo -n "Continue [yes/NO]? "
-read a
-if [ "$a" != "YES" -a "$a" != "yes" ]; then
-	echo
-	echo Aborted.
-	exit 0
-fi
-
-action $"Preparing for global service shutdown:" clusvcadm -u
-if [ $? -ne 0 ]; then
-	exit 1;
-fi
-
-errors=0
-for s in `cludb -m services%service[0-9]+%name | cut -f2 -d=`; do
-	action "Stopping service $s: " clusvcadm -q -s $s
-	if [ $? -ne 0 ]; then
-		exit 1
-	fi
-done
-
-echo "All clustered services are stopped."
-
-action $"Locking service managers:" clusvcadm -l
-if [ $? -ne 0 ]; then
-	exit 1
-fi
-
-echo
-echo $"It is now safe to shut down all cluster members.  Be advised that"
-echo $"members not controlled by power switches may still reboot when "
-echo $"when the cluster quorum is disbanded."
-echo
-exit 0

hooks/post-receive
--
Cluster Project