From mboxrd@z Thu Jan 1 00:00:00 1970
Received: (qmail 17682 invoked by alias); 18 Apr 2007 19:14:23 -0000
Received: (qmail 17654 invoked by uid 9478); 18 Apr 2007 19:14:22 -0000
Date: Wed, 18 Apr 2007 19:14:00 -0000
Message-ID: <20070418191422.17653.qmail@sourceware.org>
From: jbrassow@sourceware.org
To: cluster-cvs@sources.redhat.com
Subject: cluster/rgmanager/src/resources lvm.sh
Mailing-List: contact cluster-cvs-help@sourceware.org; run by ezmlm
Precedence: bulk
Sender: cluster-cvs-owner@sourceware.org
X-SW-Source: 2007-q2/txt/msg00054.txt.bz2

CVSROOT:	/cvs/cluster
Module name:	cluster
Branch: 	RHEL5
Changes by:	jbrassow@sourceware.org	2007-04-18 20:14:22

Modified files:
	rgmanager/src/resources: lvm.sh

Log message:
	Bug 236580: [HA LVM]: Bringing site back on-line after failure causes pr...

	Setup:
	- 2 interconnected sites
	- each site has a disk and a machine
	- LVM mirroring is used to mirror the disks between the sites

	When one site fails, LVM happily fails over to the second site,
	removing the failed site's disk from the volume group. However, when
	the failed site is restored and the service attempts to move back to
	the original machine, activation fails because of conflicting LVM
	metadata on the disks.

	This fix allows the LV to be reactivated on the original node by
	filtering out the devices that hold stale metadata (i.e., the device
	that was removed during the failure).

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/rgmanager/src/resources/lvm.sh.diff?cvsroot=cluster&only_with_tag=RHEL5&r1=1.1.6.1&r2=1.1.6.2
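
For illustration only (not the actual patch, see the diff above): the filtering idea can be
sketched as a one-shot LVM config override that rejects the stale device while the LV is
reactivated. The device, VG, and LV names below are hypothetical, and the real lvm.sh
derives the filter from the cluster configuration rather than hard-coding it; this also
assumes an LVM2 build that honors the --config override.

    # Hypothetical names; the real resource agent computes these.
    STALE_DEV="/dev/sdb"   # disk that was dropped from the VG during the site failure
    VG="ha_vg"
    LV="ha_lv"

    # Reject the stale device, accept everything else, and pass the filter as a
    # one-shot override so the on-disk lvm.conf is left untouched.
    lvchange -ay \
        --config "devices { filter = [ \"r|^${STALE_DEV}\$|\", \"a|.*|\" ] }" \
        "${VG}/${LV}"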