From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 17616 invoked by alias); 3 Mar 2006 21:52:55 -0000
Received: (qmail 17600 invoked by uid 9453); 3 Mar 2006 21:52:55 -0000
Date: Fri, 03 Mar 2006 21:52:00 -0000
Message-ID: <20060303215255.17598.qmail@sourceware.org>
From: teigland@sourceware.org
To: cluster-cvs@sources.redhat.com
Subject: cluster/group/daemon app.c cpg.c gd_internal.h
Mailing-List: contact cluster-cvs-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Subscribe:
List-Post:
List-Help:
Sender: cluster-cvs-owner@sourceware.org
X-SW-Source: 2006-q1/txt/msg00302.txt.bz2
List-Id:

CVSROOT:	/cvs/cluster
Module name:	cluster
Changes by:	teigland@sourceware.org	2006-03-03 21:52:55

Modified files:
	group/daemon   : app.c cpg.c gd_internal.h

Log message:
	Because cpg leaves are processed asynchronously, we can't use the
	cpg being changed to send messages; the actual cpg membership may
	not reflect the nodes we need to send/recv messages with.  Stopped
	and started messages sent during async confchg processing now go
	through a separate cpg that the groupd daemon itself joins,
	connecting it to all other groupd daemons.  Overlapping leave
	events now work (as when multiple nodes run fence_tool leave at
	about the same time), as do leaves that occur while the group is
	processing a join.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/daemon/app.c.diff?cvsroot=cluster&r1=1.11&r2=1.12
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/daemon/cpg.c.diff?cvsroot=cluster&r1=1.13&r2=1.14
http://sourceware.org/cgi-bin/cvsweb.cgi/cluster/group/daemon/gd_internal.h.diff?cvsroot=cluster&r1=1.25&r2=1.26
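The race the log message describes can be sketched as a toy model in C. This is not the actual groupd code and all names here (cpg_mask, reaches, group_cpg, groupd_cpg) are hypothetical; each "cpg" is reduced to a bitmask of member node ids. The point is only that once node 3's leave confchg has been applied to the group's own cpg, a multicast on that cpg no longer reaches node 3, even though the leave event is still being processed asynchronously and node 3 still needs the stopped/started messages; the daemon-wide cpg that every groupd joins at startup still includes it.

```c
#include <assert.h>

/* Toy model, hypothetical names -- not the real groupd code.
 * A "cpg" is just a bitmask of member node ids 1..8. */
typedef unsigned char cpg_mask;

/* Per-group cpg after node 3's leave confchg was applied: nodes 1,2
 * only.  The leave event itself is still being processed, so node 3
 * still needs to exchange stopped/started messages with the others. */
static const cpg_mask group_cpg = 0x03;

/* Daemon-wide cpg that every groupd daemon joins at startup, so it
 * always connects all groupd daemons: nodes 1, 2 and 3. */
static const cpg_mask groupd_cpg = 0x07;

/* Would a multicast on this cpg be delivered to the given node? */
static int reaches(cpg_mask members, int node)
{
    return (members >> (node - 1)) & 1;
}
```

Under this model, sending stopped/started messages on the group's own cpg would silently miss the leaving node, which is why the patch routes them through the separate daemon-wide cpg instead.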