public inbox for lvm2-cvs@sourceware.org
* LVM2 ./WHATS_NEW daemons/clvmd/clvmd-openais.c
@ 2008-06-20 12:46 ccaulfield
  0 siblings, 0 replies; 5+ messages in thread
From: ccaulfield @ 2008-06-20 12:46 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2008-06-20 12:46:22

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : clvmd-openais.c 

Log message:
	Make clvmd return immediately if other nodes are down in an openais cluster.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.907&r2=1.908
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.6&r2=1.7

--- LVM2/WHATS_NEW	2008/06/20 10:58:27	1.907
+++ LVM2/WHATS_NEW	2008/06/20 12:46:21	1.908
@@ -1,5 +1,6 @@
 Version 2.02.39 -
 ================================
+  Make clvmd return immediately if other nodes are down in an openais cluster.
   Make clvmd return immediately if other nodes are down in a gulm cluster.
   Improve/Fix read ahead 'auto' calculation for stripe_size
   Fix lvchange output for -r auto setting if auto is already set
--- LVM2/daemons/clvmd/clvmd-openais.c	2008/04/29 08:55:20	1.6
+++ LVM2/daemons/clvmd/clvmd-openais.c	2008/06/20 12:46:21	1.7
@@ -452,6 +452,7 @@
 {
 	struct dm_hash_node *hn;
 	struct node_info *ninfo;
+	int somedown = 0;
 
 	dm_hash_iterate(hn, node_hash)
 	{
@@ -465,8 +466,10 @@
 
 		if (ninfo->state != NODE_DOWN)
 			callback(master_client, csid, ninfo->state == NODE_CLVMD);
+		if (ninfo->state != NODE_CLVMD)
+			somedown = -1;
 	}
-	return 0;
+	return somedown;
 }
 
 /* Real locking */

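The change above adds a `somedown` flag so the node scan reports failure as soon as any cluster member is not running clvmd, letting the caller return immediately instead of waiting for replies that will never come. The shape of the fix can be sketched standalone; `scan_nodes` and the flat state array below are illustrative stand-ins for the real `dm_hash_iterate` loop over `node_hash`, not clvmd API:

```c
#include <assert.h>

/* Stand-in for the NODE_* states used in clvmd-openais.c. */
enum node_state { NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD };

/* Sketch of the fixed logic: report every reachable node via the
 * callback, but return -1 if any node is not running clvmd so the
 * caller can bail out at once. */
static int scan_nodes(const enum node_state *states, int n,
		      void (*callback)(int idx, int has_clvmd))
{
	int somedown = 0;
	for (int i = 0; i < n; i++) {
		if (states[i] != NODE_DOWN && callback)
			callback(i, states[i] == NODE_CLVMD);
		if (states[i] != NODE_CLVMD)
			somedown = -1;
	}
	return somedown;
}
```

With every node in `NODE_CLVMD` the scan returns 0; a single down or clvmd-less node flips the result to -1.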


* LVM2 ./WHATS_NEW daemons/clvmd/clvmd-openais.c
@ 2008-04-29  8:55 ccaulfield
  0 siblings, 0 replies; 5+ messages in thread
From: ccaulfield @ 2008-04-29  8:55 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2008-04-29 08:55:20

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : clvmd-openais.c 

Log message:
	. remove_lock_wait.diff: remove the definition of "struct lock_wait",
	which has been unused since the switch away from the async saLck calls.
	. num_nodes should equal member_list_entries; joined_list_entries is 0
	when a node leaves the group.
	
	Thanks to Xinwei Hu for the patch.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.862&r2=1.863
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.5&r2=1.6

--- LVM2/WHATS_NEW	2008/04/28 08:57:11	1.862
+++ LVM2/WHATS_NEW	2008/04/29 08:55:19	1.863
@@ -1,5 +1,6 @@
 Version 2.02.36 - 
 =================================
+  Remove unused struct in clvmd-openais, and use correct node count.
   Fix nodes list in clvmd-openais, and allow for broadcast messages.
   Exclude VG_GLOBAL from internal concurrent VG lock counter.
   Fix vgsplit internal counting of snapshot LVs.
--- LVM2/daemons/clvmd/clvmd-openais.c	2008/04/28 08:57:11	1.5
+++ LVM2/daemons/clvmd/clvmd-openais.c	2008/04/29 08:55:20	1.6
@@ -100,13 +100,6 @@
 	SaNameT              lock_name;
 };
 
-struct lock_wait
-{
-	pthread_cond_t cond;
-	pthread_mutex_t mutex;
-	int status;
-};
-
 /* Set errno to something approximating the right value and return 0 or -1 */
 static int ais_to_errno(SaAisErrorT err)
 {
@@ -313,7 +306,7 @@
 		ninfo->state = NODE_CLVMD;
 	}
 
-	num_nodes = joined_list_entries;
+	num_nodes = member_list_entries;
 }
 
 static int lck_dispatch(struct local_client *client, char *buf, int len,

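The one-line fix above matters because, in a CPG configuration-change callback, the joined list is only the delta for that change (nodes that just joined), so it is empty when a node leaves, while the member list is always the full current membership. A minimal sketch, with an illustrative struct in place of the real corosync callback arguments:

```c
#include <assert.h>

/* Illustrative shape of a configuration-change event; the real
 * cpg_confchg_callback receives these counts as separate arguments. */
struct confchg {
	int member_list_entries;  /* full current membership */
	int joined_list_entries;  /* nodes added by this change only */
	int left_list_entries;    /* nodes removed by this change only */
};

/* The fix: the node count must track the full member list, never the
 * per-event joined list. */
static int count_nodes(const struct confchg *c)
{
	return c->member_list_entries;
}
```

When a node leaves a three-node group the event carries `member_list_entries == 2` and `joined_list_entries == 0`; using the joined list would have reported an empty cluster.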


* LVM2 ./WHATS_NEW daemons/clvmd/clvmd-openais.c
@ 2008-04-28  8:57 ccaulfield
  0 siblings, 0 replies; 5+ messages in thread
From: ccaulfield @ 2008-04-28  8:57 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2008-04-28 08:57:11

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : clvmd-openais.c 

Log message:
	The attached patch is an attempt to make clvmd work correctly on the
	openais stack. It does two things:
	
	1. cpg_deliver_callback compares target_nodeid against our_nodeid, but
	it turns out openais sometimes sets target_nodeid to 0, apparently for
	broadcasts. Change the behaviour so that messages with
	target_nodeid == 0 are also processed.
	
	2. The joined_list passed to cpg_confchg_callback does not include the
	nodes already in the group, which leads to an incomplete node_hash.
	Simply add all other nodes from member_list to node_hash as well.
	
	Thanks to Xinwei Hu for this patch.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.861&r2=1.862
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.4&r2=1.5

--- LVM2/WHATS_NEW	2008/04/24 02:22:07	1.861
+++ LVM2/WHATS_NEW	2008/04/28 08:57:11	1.862
@@ -1,5 +1,6 @@
 Version 2.02.36 - 
 =================================
+  Fix nodes list in clvmd-openais, and allow for broadcast messages.
   Exclude VG_GLOBAL from internal concurrent VG lock counter.
   Fix vgsplit internal counting of snapshot LVs.
   Fix vgmerge snapshot_count when source VG contains snapshots.
--- LVM2/daemons/clvmd/clvmd-openais.c	2008/04/23 09:53:49	1.4
+++ LVM2/daemons/clvmd/clvmd-openais.c	2008/04/28 08:57:11	1.5
@@ -245,12 +245,13 @@
 
 	memcpy(&target_nodeid, msg, OPENAIS_CSID_LEN);
 
-	DEBUGLOG("Got message from nodeid %d for %d. len %d\n",
-		 nodeid, target_nodeid, msg_len-4);
+	DEBUGLOG("%u got message from nodeid %d for %d. len %d\n",
+		 our_nodeid, nodeid, target_nodeid, msg_len-4);
 
-	if (target_nodeid == our_nodeid)
-		process_message(cluster_client, (char *)msg+OPENAIS_CSID_LEN,
-				msg_len-OPENAIS_CSID_LEN, (char*)&nodeid);
+	if (nodeid != our_nodeid)
+		if (target_nodeid == our_nodeid || target_nodeid == 0)
+			process_message(cluster_client, (char *)msg+OPENAIS_CSID_LEN,
+					msg_len-OPENAIS_CSID_LEN, (char*)&nodeid);
 }
 
 static void cpg_confchg_callback(cpg_handle_t handle,
@@ -292,10 +293,29 @@
 			ninfo->state = NODE_DOWN;
 	}
 
+	for (i=0; i<member_list_entries; i++) {
+		if (member_list[i].nodeid == 0) continue;
+		ninfo = dm_hash_lookup_binary(node_hash,
+				(char *)&member_list[i].nodeid,
+				OPENAIS_CSID_LEN);
+		if (!ninfo) {
+			ninfo = malloc(sizeof(struct node_info));
+			if (!ninfo) {
+				break;
+			}
+			else {
+				ninfo->nodeid = member_list[i].nodeid;
+				dm_hash_insert_binary(node_hash,
+						(char *)&ninfo->nodeid,
+						OPENAIS_CSID_LEN, ninfo);
+			}
+		}
+		ninfo->state = NODE_CLVMD;
+	}
+
 	num_nodes = joined_list_entries;
 }
 
-
 static int lck_dispatch(struct local_client *client, char *buf, int len,
 			const char *csid, struct local_client **new_client)
 {

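The delivery change above boils down to a routing predicate: process a message when it comes from another node and is either addressed to us or broadcast (target_nodeid 0). Extracted as a pure function for illustration (the name `should_process` is not in the patch):

```c
#include <assert.h>

/* Sketch of the post-patch delivery rule in cpg_deliver_callback:
 * ignore our own messages, accept messages addressed to us, and also
 * accept target_nodeid == 0, which openais uses for broadcasts. */
static int should_process(unsigned nodeid, unsigned target_nodeid,
			  unsigned our_nodeid)
{
	if (nodeid == our_nodeid)
		return 0;			/* our own message: skip */
	return target_nodeid == our_nodeid	/* addressed to us ...  */
	    || target_nodeid == 0;		/* ... or broadcast     */
}
```

The pre-patch code only tested `target_nodeid == our_nodeid`, so broadcasts were silently dropped.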


* LVM2 ./WHATS_NEW daemons/clvmd/clvmd-openais.c
@ 2008-04-23  9:53 ccaulfield
  0 siblings, 0 replies; 5+ messages in thread
From: ccaulfield @ 2008-04-23  9:53 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2008-04-23 09:53:49

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : clvmd-openais.c 

Log message:
	Simplify locking code by using saLckResourceLock rather than
	saLckResourceLockAsync.
	
	Thanks to Xinwei Hu for the patch.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.857&r2=1.858
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.3&r2=1.4

--- LVM2/WHATS_NEW	2008/04/22 12:54:32	1.857
+++ LVM2/WHATS_NEW	2008/04/23 09:53:49	1.858
@@ -1,5 +1,6 @@
 Version 2.02.36 - 
 =================================
+  Simplify clvmd-openais by using non-async saLckResourceLock.
   Check lv_count in vg_validate.
   Fix internal LV counter when a snapshot is removed.
   Fix metadata corruption writing lvm1-formatted metadata with snapshots.
--- LVM2/daemons/clvmd/clvmd-openais.c	2007/07/11 12:07:39	1.3
+++ LVM2/daemons/clvmd/clvmd-openais.c	2008/04/23 09:53:49	1.4
@@ -50,11 +50,6 @@
 /* Timeout value for several openais calls */
 #define TIMEOUT 10
 
-static void lck_lock_callback(SaInvocationT invocation,
-			      SaLckLockStatusT lockStatus,
-			      SaAisErrorT error);
-static void lck_unlock_callback(SaInvocationT invocation,
-				SaAisErrorT error);
 static void cpg_deliver_callback (cpg_handle_t handle,
 				  struct cpg_name *groupName,
 				  uint32_t nodeid,
@@ -92,11 +87,6 @@
 	.cpg_confchg_fn =            cpg_confchg_callback,
 };
 
-SaLckCallbacksT lck_callbacks = {
-        .saLckLockGrantCallback      = lck_lock_callback,
-        .saLckResourceUnlockCallback = lck_unlock_callback
-};
-
 struct node_info
 {
 	enum {NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD} state;
@@ -305,32 +295,6 @@
 	num_nodes = joined_list_entries;
 }
 
-static void lck_lock_callback(SaInvocationT invocation,
-			      SaLckLockStatusT lockStatus,
-			      SaAisErrorT error)
-{
-	struct lock_wait *lwait = (struct lock_wait *)(long)invocation;
-
-	DEBUGLOG("lck_lock_callback, error = %d\n", error);
-
-	lwait->status = error;
-	pthread_mutex_lock(&lwait->mutex);
-	pthread_cond_signal(&lwait->cond);
-	pthread_mutex_unlock(&lwait->mutex);
-}
-
-static void lck_unlock_callback(SaInvocationT invocation,
-				SaAisErrorT error)
-{
-	struct lock_wait *lwait = (struct lock_wait *)(long)invocation;
-
-	DEBUGLOG("lck_unlock_callback\n");
-
-	lwait->status = SA_AIS_OK;
-	pthread_mutex_lock(&lwait->mutex);
-	pthread_cond_signal(&lwait->cond);
-	pthread_mutex_unlock(&lwait->mutex);
-}
 
 static int lck_dispatch(struct local_client *client, char *buf, int len,
 			const char *csid, struct local_client **new_client)
@@ -359,7 +323,7 @@
 	}
 
 	err = saLckInitialize(&lck_handle,
-			      &lck_callbacks,
+					NULL,
 			      &ver);
 	if (err != SA_AIS_OK) {
 		cpg_initialize(&cpg_handle, &cpg_callbacks);
@@ -495,15 +459,11 @@
 /* Real locking */
 static int _lock_resource(char *resource, int mode, int flags, int *lockid)
 {
-	struct lock_wait lwait;
 	struct lock_info *linfo;
 	SaLckResourceHandleT res_handle;
 	SaAisErrorT err;
 	SaLckLockIdT lock_id;
-
-	pthread_cond_init(&lwait.cond, NULL);
-	pthread_mutex_init(&lwait.mutex, NULL);
-	pthread_mutex_lock(&lwait.mutex);
+	SaLckLockStatusT lockStatus;
 
 	/* This needs to be converted from DLM/LVM2 value for OpenAIS LCK */
 	if (flags & LCK_NONBLOCK) flags = SA_LCK_LOCK_NO_QUEUE;
@@ -526,24 +486,24 @@
 		return ais_to_errno(err);
 	}
 
-	err = saLckResourceLockAsync(res_handle,
-				     (SaInvocationT)(long)&lwait,
-				     &lock_id,
-				     mode,
-				     flags,
-				     0);
-	if (err != SA_AIS_OK)
+	err = saLckResourceLock(
+			res_handle,
+			&lock_id,
+			mode,
+			flags,
+			0,
+			SA_TIME_END,
+			&lockStatus);
+	if (err != SA_AIS_OK && lockStatus != SA_LCK_LOCK_GRANTED)
 	{
 		free(linfo);
 		saLckResourceClose(res_handle);
 		return ais_to_errno(err);
 	}
-
+			
 	/* Wait for it to complete */
-	pthread_cond_wait(&lwait.cond, &lwait.mutex);
-	pthread_mutex_unlock(&lwait.mutex);
 
-	DEBUGLOG("lock_resource returning %d, lock_id=%llx\n", lwait.status,
+	DEBUGLOG("lock_resource returning %d, lock_id=%llx\n", err,
 		 lock_id);
 
 	linfo->lock_id = lock_id;
@@ -551,43 +511,34 @@
 
 	dm_hash_insert(lock_hash, resource, linfo);
 
-	return ais_to_errno(lwait.status);
+	return ais_to_errno(err);
 }
 
 
 static int _unlock_resource(char *resource, int lockid)
 {
-	struct lock_wait lwait;
 	SaAisErrorT err;
 	struct lock_info *linfo;
 
-	pthread_cond_init(&lwait.cond, NULL);
-	pthread_mutex_init(&lwait.mutex, NULL);
-	pthread_mutex_lock(&lwait.mutex);
-
 	DEBUGLOG("unlock_resource %s\n", resource);
 	linfo = dm_hash_lookup(lock_hash, resource);
 	if (!linfo)
 		return 0;
 
 	DEBUGLOG("unlock_resource: lockid: %llx\n", linfo->lock_id);
-	err = saLckResourceUnlockAsync((SaInvocationT)(long)&lwait, linfo->lock_id);
+	err = saLckResourceUnlock(linfo->lock_id, SA_TIME_END);
 	if (err != SA_AIS_OK)
 	{
 		DEBUGLOG("Unlock returned %d\n", err);
 		return ais_to_errno(err);
 	}
 
-	/* Wait for it to complete */
-	pthread_cond_wait(&lwait.cond, &lwait.mutex);
-	pthread_mutex_unlock(&lwait.mutex);
-
 	/* Release the resource */
 	dm_hash_remove(lock_hash, resource);
 	saLckResourceClose(linfo->res_handle);
 	free(linfo);
 
-	return ais_to_errno(lwait.status);
+	return ais_to_errno(err);
 }
 
 static int _sync_lock(const char *resource, int mode, int flags, int *lockid)

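The scaffolding this patch deletes is the classic async-to-blocking bridge: each caller parked on a condition variable inside `struct lock_wait` until the saLck completion callback signalled it. The synchronous saLckResourceLock (called with SA_TIME_END as the timeout) blocks internally and returns the status directly, so the whole bridge disappears. A self-contained sketch of the removed pattern, with the async completion simulated by a plain thread (nothing below is real saLck API):

```c
#include <assert.h>
#include <pthread.h>

/* The struct the patch removes: per-call rendezvous state. */
struct lock_wait {
	pthread_cond_t cond;
	pthread_mutex_t mutex;
	int status;
};

/* Simulated async completion callback, run from another thread. */
static void *fake_async_completion(void *arg)
{
	struct lock_wait *lwait = arg;
	pthread_mutex_lock(&lwait->mutex);
	lwait->status = 0;			/* "SA_AIS_OK" */
	pthread_cond_signal(&lwait->cond);
	pthread_mutex_unlock(&lwait->mutex);
	return 0;
}

/* The old blocking-over-async shape: hold the mutex before starting
 * the async operation so the wakeup cannot be lost, then wait for the
 * callback to deliver the status. */
static int blocking_lock(void)
{
	struct lock_wait lwait;
	pthread_t tid;

	pthread_cond_init(&lwait.cond, NULL);
	pthread_mutex_init(&lwait.mutex, NULL);
	pthread_mutex_lock(&lwait.mutex);
	lwait.status = -1;

	pthread_create(&tid, NULL, fake_async_completion, &lwait);
	pthread_cond_wait(&lwait.cond, &lwait.mutex);	/* wait for callback */
	pthread_mutex_unlock(&lwait.mutex);
	pthread_join(tid, NULL);
	return lwait.status;
}
```

With a synchronous lock call the library performs this wait itself, which is why `_lock_resource` and `_unlock_resource` shrink so much in the diff.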


* LVM2 ./WHATS_NEW daemons/clvmd/clvmd-openais.c
@ 2007-07-11 12:07 pcaulfield
  0 siblings, 0 replies; 5+ messages in thread
From: pcaulfield @ 2007-07-11 12:07 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2007-07-11 12:07:39

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : clvmd-openais.c 

Log message:
	Tidy bits of clvmd-openais and improve an error report.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.651&r2=1.652
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.2&r2=1.3

--- LVM2/WHATS_NEW	2007/07/10 18:50:02	1.651
+++ LVM2/WHATS_NEW	2007/07/11 12:07:39	1.652
@@ -1,5 +1,6 @@
 Version 2.02.27 - 
 ================================
+  Tidy clvmd-openais of redundant bits, and improve an error report.
   Cope with find_seg_by_le() failure in check_lv_segments().
   Call dev_iter_destroy() if _process_all_devs() is interrupted by sigint.
   Add vg_mda_count and pv_mda_count columns to reports.
--- LVM2/daemons/clvmd/clvmd-openais.c	2007/06/25 09:02:37	1.2
+++ LVM2/daemons/clvmd/clvmd-openais.c	2007/07/11 12:07:39	1.3
@@ -37,7 +37,6 @@
 
 #include <openais/saAis.h>
 #include <openais/saLck.h>
-#include <openais/saClm.h>
 #include <openais/cpg.h>
 
 #include "list.h"
@@ -346,7 +345,6 @@
 	SaAisErrorT err;
 	SaVersionT  ver = { 'B', 1, 1 };
 	int select_fd;
-	SaClmClusterNodeT cluster_node;
 
 	node_hash = dm_hash_create(100);
 	lock_hash = dm_hash_create(10);
@@ -379,19 +377,18 @@
 		cpg_finalize(cpg_handle);
 		saLckFinalize(lck_handle);
 		syslog(LOG_ERR, "Cannot join clvmd process group");
-		DEBUGLOG("Cannot join clvmd process group\n");
+		DEBUGLOG("Cannot join clvmd process group: %d\n", err);
 		return ais_to_errno(err);
 	}
 
 	err = cpg_local_get(cpg_handle,
-			    &cluster_node);
+			    &our_nodeid);
 	if (err != SA_AIS_OK) {
 		cpg_finalize(cpg_handle);
 		saLckFinalize(lck_handle);
 		syslog(LOG_ERR, "Cannot get local node id\n");
 		return ais_to_errno(err);
 	}
-	our_nodeid = cluster_node.nodeId;
 	DEBUGLOG("Our local node id is %d\n", our_nodeid);
 
 	saLckSelectionObjectGet(lck_handle, (SaSelectionObjectT *)&select_fd);


