public inbox for lvm2-cvs@sourceware.org
* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2005-02-21 15:58 pcaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: pcaulfield @ 2005-02-21 15:58 UTC (permalink / raw)
  To: lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2005-02-21 15:58:06

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd.h 

Log message:
	./configure --enable-debug now enables debugging code in clvmd

Patches:
http://sources.redhat.com/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.189&r2=1.190
http://sources.redhat.com/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.9&r2=1.10
http://sources.redhat.com/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.h.diff?cvsroot=lvm2&r1=1.4&r2=1.5

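For reference, a minimal build-and-run sketch for the option named in the log
message above (the in-tree binary path is an assumption, not part of the commit):

	# compile the debugging code into clvmd
	./configure --enable-debug
	make
	# -d turns the compiled-in debug output on at runtime (see the clvmd
	# usage text later in this thread)
	./daemons/clvmd/clvmd -d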


* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2009-02-11 10:13 ccaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: ccaulfield @ 2009-02-11 10:13 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2009-02-11 10:13:21

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd-corosync.c 

Log message:
	Add a fully-functional get_cluster_name() to clvmd corosync interface.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.1041&r2=1.1042
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.26&r2=1.27
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-corosync.c.diff?cvsroot=lvm2&r1=1.4&r2=1.5

--- LVM2/WHATS_NEW	2009/02/10 13:22:18	1.1041
+++ LVM2/WHATS_NEW	2009/02/11 10:13:20	1.1042
@@ -1,5 +1,6 @@
 Version 2.02.45 - 
 ===================================
+  Add a fully-functional get_cluster_name() to clvmd corosync interface.
   Remove duplicate cpg_initialize from clvmd startup.
   Add option to /etc/sysconfig/cluster to select cluster type for clvmd.
   Allow clvmd to start up if its lockspace already exists.
--- LVM2/daemons/clvmd/Makefile.in	2009/02/02 14:34:25	1.26
+++ LVM2/daemons/clvmd/Makefile.in	2009/02/11 10:13:20	1.27
@@ -68,7 +68,7 @@
 
 ifeq ("$(COROSYNC)", "yes")
         SOURCES += clvmd-corosync.c
-        LMLIBS += -lquorum -lcpg -ldlm
+        LMLIBS += -lquorum -lconfdb -lcpg -ldlm
         DEFS += -DUSE_COROSYNC
 endif
 
--- LVM2/daemons/clvmd/clvmd-corosync.c	2009/02/10 13:22:18	1.4
+++ LVM2/daemons/clvmd/clvmd-corosync.c	2009/02/11 10:13:20	1.5
@@ -42,6 +42,7 @@
 #include <corosync/corotypes.h>
 #include <corosync/cpg.h>
 #include <corosync/quorum.h>
+#include <corosync/confdb.h>
 #include <libdlm.h>
 
 #include "locking.h"
@@ -507,7 +508,6 @@
 	return 0;
 }
 
-/* We are always quorate ! */
 static int _is_quorate()
 {
 	int quorate;
@@ -556,10 +556,49 @@
 	return cs_to_errno(err);
 }
 
-/* We don't have a cluster name to report here */
+/*
+ * We are not necessarily connected to a Red Hat Cluster system,
+ * but if we are, this returns the cluster name from cluster.conf.
+ * I've used confdb rather than ccs to reduce the inter-package
+ * dependencies as well as to allow people to set a cluster name
+ * for themselves even if they are not running on RH cluster.
+ */
 static int _get_cluster_name(char *buf, int buflen)
 {
+	confdb_handle_t handle;
+	int result;
+	int namelen = buflen;
+	unsigned int cluster_handle;
+	confdb_callbacks_t callbacks = {
+		.confdb_key_change_notify_fn = NULL,
+		.confdb_object_create_change_notify_fn = NULL,
+		.confdb_object_delete_change_notify_fn = NULL
+	};
+
+	/* This is a default in case everything else fails */
 	strncpy(buf, "Corosync", buflen);
+
+	/* Look for a cluster name in confdb */
+	result = confdb_initialize (&handle, &callbacks);
+        if (result != CS_OK)
+		return 0;
+
+        result = confdb_object_find_start(handle, OBJECT_PARENT_HANDLE);
+	if (result != CS_OK)
+		goto out;
+
+        result = confdb_object_find(handle, OBJECT_PARENT_HANDLE, (void *)"cluster", strlen("cluster"), &cluster_handle);
+        if (result != CS_OK)
+		goto out;
+
+        result = confdb_key_get(handle, cluster_handle, (void *)"name", strlen("name"), buf, &namelen);
+        if (result != CS_OK)
+		goto out;
+
+	buf[namelen] = '\0';
+
+out:
+	confdb_finalize(handle);
 	return 0;
 }
 


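As the new comment explains, the value confdb hands back is simply whatever
cluster.conf defines, falling back to "Corosync" when no such object exists.
A quick way to see where that name comes from (standard Red Hat Cluster path
assumed, not part of this commit):

	# the "name" key of the "cluster" confdb object mirrors the name
	# attribute in cluster.conf, e.g. <cluster name="mycluster" ...>
	grep 'cluster name=' /etc/cluster/cluster.conf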

* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2009-02-02 14:34 ccaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: ccaulfield @ 2009-02-02 14:34 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2009-02-02 14:34:25

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd-corosync.c clvmd-openais.c 
	                 clvmd.c 

Log message:
	Allow clvmd to be built with all cluster managers & select one on cmdline.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.1031&r2=1.1032
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.25&r2=1.26
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-corosync.c.diff?cvsroot=lvm2&r1=1.1&r2=1.2
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.9&r2=1.10
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.c.diff?cvsroot=lvm2&r1=1.53&r2=1.54

--- LVM2/WHATS_NEW	2009/01/29 15:23:15	1.1031
+++ LVM2/WHATS_NEW	2009/02/02 14:34:24	1.1032
@@ -1,6 +1,7 @@
 Version 2.02.45 - 
 ===================================
-  Mention --with-clvmd=corosync in ./configure
+  Allow clvmd to be built with all cluster managers & select one on cmdline.
+  Mention --with-clvmd=corosync in ./configure.
   Replace internal vg_check_status() implementation.
   Rename vg_read() to vg_read_internal().
 
--- LVM2/daemons/clvmd/Makefile.in	2009/01/22 10:21:12	1.25
+++ LVM2/daemons/clvmd/Makefile.in	2009/02/02 14:34:25	1.26
@@ -21,31 +21,26 @@
 	lvm-functions.c  \
 	refresh_clvmd.c
 
-ifeq ("@CLVMD@", "gulm")
+ifneq (,$(findstring gulm,, "@CLVMD@,"))
 	GULM = yes
 endif
 
-ifeq ("@CLVMD@", "cman")
+ifneq (,$(findstring cman,, "@CLVMD@,"))
 	CMAN = yes
 endif
 
-ifeq ("@CLVMD@", "openais")
+ifneq (,$(findstring openais,, "@CLVMD@,"))
 	OPENAIS = yes
-	GULM = no
-	CMAN = no
 endif
 
-ifeq ("@CLVMD@", "all")
-	GULM = yes
-	CMAN = yes
-	OPENAIS = no
-	COROSYNC = no
+ifneq (,$(findstring corosync,, "@CLVMD@,"))
+	COROSYNC = yes
 endif
 
-ifeq ("@CLVMD@", "corosync")
-	GULM = no
-	CMAN = no
-	OPENAIS = no
+ifneq (,$(findstring all,, "@CLVMD@,"))
+	GULM = yes
+	CMAN = yes
+	OPENAIS = yes
 	COROSYNC = yes
 endif
 
--- LVM2/daemons/clvmd/clvmd-corosync.c	2009/01/22 10:21:12	1.1
+++ LVM2/daemons/clvmd/clvmd-corosync.c	2009/02/02 14:34:25	1.2
@@ -194,7 +194,7 @@
 	return -1;
 }
 
-static char *print_csid(const char *csid)
+static char *print_corosync_csid(const char *csid)
 {
 	static char buf[128];
 	int id;
@@ -392,7 +392,7 @@
 	ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
 	if (!ninfo)
 	{
-		sprintf(name, "UNKNOWN %s", print_csid(csid));
+		sprintf(name, "UNKNOWN %s", print_corosync_csid(csid));
 		return -1;
 	}
 
@@ -414,7 +414,7 @@
 	ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
 	if (!ninfo) {
 		DEBUGLOG("corosync_add_up_node no node_hash entry for csid %s\n",
-			 print_csid(csid));
+			 print_corosync_csid(csid));
 		return;
 	}
 
--- LVM2/daemons/clvmd/clvmd-openais.c	2008/11/04 16:41:47	1.9
+++ LVM2/daemons/clvmd/clvmd-openais.c	2009/02/02 14:34:25	1.10
@@ -195,7 +195,7 @@
 	return -1;
 }
 
-static char *print_csid(const char *csid)
+static char *print_openais_csid(const char *csid)
 {
 	static char buf[128];
 	int id;
@@ -415,7 +415,7 @@
 	ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
 	if (!ninfo)
 	{
-		sprintf(name, "UNKNOWN %s", print_csid(csid));
+		sprintf(name, "UNKNOWN %s", print_openais_csid(csid));
 		return -1;
 	}
 
@@ -437,7 +437,7 @@
 	ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
 	if (!ninfo) {
 		DEBUGLOG("openais_add_up_node no node_hash entry for csid %s\n",
-			 print_csid(csid));
+			 print_openais_csid(csid));
 		return;
 	}
 
--- LVM2/daemons/clvmd/clvmd.c	2009/01/22 10:21:12	1.53
+++ LVM2/daemons/clvmd/clvmd.c	2009/02/02 14:34:25	1.54
@@ -108,6 +108,8 @@
 #define DFAIL_TIMEOUT    5
 #define SUCCESS          0
 
+typedef enum {IF_AUTO, IF_CMAN, IF_GULM, IF_OPENAIS, IF_COROSYNC} if_type_t;
+
 /* Prototypes for code further down */
 static void sigusr2_handler(int sig);
 static void sighup_handler(int sig);
@@ -144,6 +146,7 @@
 static void ntoh_clvm(struct clvm_header *hdr);
 static void add_reply_to_list(struct local_client *client, int status,
 			      const char *csid, const char *buf, int len);
+static if_type_t parse_cluster_interface(char *ifname);
 
 static void usage(char *prog, FILE *file)
 {
@@ -158,6 +161,20 @@
 	fprintf(file, "   -C       Sets debug level (from -d) on all clvmd instances clusterwide\n");
 	fprintf(file, "   -t<secs> Command timeout (default 60 seconds)\n");
 	fprintf(file, "   -T<secs> Startup timeout (default none)\n");
+	fprintf(file, "   -I<cmgr> Cluster manager (default: auto)\n");
+	fprintf(file, "            Available cluster managers: ");
+#ifdef USE_COROSYNC
+	fprintf(file, "corosync ");
+#endif
+#ifdef USE_CMAN
+	fprintf(file, "cman ");
+#endif
+#ifdef USE_OPENAIS
+	fprintf(file, "openais ");
+#endif
+#ifdef USE_GULM
+	fprintf(file, "gulm ");
+#endif
 	fprintf(file, "\n");
 }
 
@@ -258,6 +275,7 @@
 	signed char opt;
 	int cmd_timeout = DEFAULT_CMD_TIMEOUT;
 	int start_timeout = 0;
+	if_type_t cluster_iface = IF_AUTO;
 	sigset_t ss;
 	int using_gulm = 0;
 	int debug_opt = 0;
@@ -266,7 +284,7 @@
 	/* Deal with command-line arguments */
 	opterr = 0;
 	optind = 0;
-	while ((opt = getopt(argc, argv, "?vVhd::t:RT:C")) != EOF) {
+	while ((opt = getopt(argc, argv, "?vVhd::t:RT:CI:")) != EOF) {
 		switch (opt) {
 		case 'h':
 			usage(argv[0], stdout);
@@ -299,6 +317,9 @@
 				exit(1);
 			}
 			break;
+		case 'I':
+			cluster_iface = parse_cluster_interface(optarg);
+			break;
 		case 'T':
 			start_timeout = atoi(optarg);
 			if (start_timeout <= 0) {
@@ -365,7 +386,7 @@
 
 	/* Start the cluster interface */
 #ifdef USE_CMAN
-	if ((clops = init_cman_cluster())) {
+	if ((cluster_iface == IF_AUTO || cluster_iface == IF_CMAN) && (clops = init_cman_cluster())) {
 		max_csid_len = CMAN_MAX_CSID_LEN;
 		max_cluster_message = CMAN_MAX_CLUSTER_MESSAGE;
 		max_cluster_member_name_len = CMAN_MAX_NODENAME_LEN;
@@ -374,7 +395,7 @@
 #endif
 #ifdef USE_GULM
 	if (!clops)
-		if ((clops = init_gulm_cluster())) {
+		if ((cluster_iface == IF_AUTO || cluster_iface == IF_GULM) && (clops = init_gulm_cluster())) {
 			max_csid_len = GULM_MAX_CSID_LEN;
 			max_cluster_message = GULM_MAX_CLUSTER_MESSAGE;
 			max_cluster_member_name_len = GULM_MAX_CLUSTER_MEMBER_NAME_LEN;
@@ -382,24 +403,24 @@
 			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to GULM");
 		}
 #endif
-#ifdef USE_OPENAIS
-	if (!clops)
-		if ((clops = init_openais_cluster())) {
-			max_csid_len = OPENAIS_CSID_LEN;
-			max_cluster_message = OPENAIS_MAX_CLUSTER_MESSAGE;
-			max_cluster_member_name_len = OPENAIS_MAX_CLUSTER_MEMBER_NAME_LEN;
-			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to OpenAIS");
-		}
-#endif
 #ifdef USE_COROSYNC
 	if (!clops)
-		if ((clops = init_corosync_cluster())) {
+		if (((cluster_iface == IF_AUTO || cluster_iface == IF_COROSYNC) && (clops = init_corosync_cluster()))) {
 			max_csid_len = COROSYNC_CSID_LEN;
 			max_cluster_message = COROSYNC_MAX_CLUSTER_MESSAGE;
 			max_cluster_member_name_len = COROSYNC_MAX_CLUSTER_MEMBER_NAME_LEN;
 			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to Corosync");
 		}
 #endif
+#ifdef USE_OPENAIS
+	if (!clops)
+		if ((cluster_iface == IF_AUTO || cluster_iface == IF_OPENAIS) && (clops = init_openais_cluster())) {
+			max_csid_len = OPENAIS_CSID_LEN;
+			max_cluster_message = OPENAIS_MAX_CLUSTER_MESSAGE;
+			max_cluster_member_name_len = OPENAIS_MAX_CLUSTER_MEMBER_NAME_LEN;
+			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to OpenAIS");
+		}
+#endif
 
 	if (!clops) {
 		DEBUGLOG("Can't initialise cluster interface\n");
@@ -2008,3 +2029,20 @@
 	return clops->sync_unlock(resource, lockid);
 }
 
+static if_type_t parse_cluster_interface(char *ifname)
+{
+	if_type_t iface = IF_AUTO;
+
+	if (!strcmp(ifname, "auto"))
+		iface = IF_AUTO;
+	if (!strcmp(ifname, "cman"))
+		iface = IF_CMAN;
+	if (!strcmp(ifname, "gulm"))
+		iface = IF_GULM;
+	if (!strcmp(ifname, "openais"))
+		iface = IF_OPENAIS;
+	if (!strcmp(ifname, "corosync"))
+		iface = IF_COROSYNC;
+
+	return iface;
+}


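A hedged usage sketch of the build and runtime selection added above; the
option values come from parse_cluster_interface(), while the configure syntax
is an assumption based on the "@CLVMD@" handling in Makefile.in:

	# build clvmd with every available cluster manager compiled in
	./configure --with-clvmd=all
	make
	# then pick one interface explicitly at startup instead of relying
	# on auto-detection (-I auto remains the default)
	clvmd -I corosync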

* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2009-01-22 10:21 ccaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: ccaulfield @ 2009-01-22 10:21 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	ccaulfield@sourceware.org	2009-01-22 10:21:12

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd-comms.h clvmd.c 
Added files:
	daemons/clvmd  : clvmd-corosync.c 

Log message:
	Add a corosync/DLM cluster service to clvmd.
	
	It's not integrated into the configure system yet, though.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.1023&r2=1.1024
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-corosync.c.diff?cvsroot=lvm2&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.24&r2=1.25
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-comms.h.diff?cvsroot=lvm2&r1=1.8&r2=1.9
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.c.diff?cvsroot=lvm2&r1=1.52&r2=1.53

--- LVM2/WHATS_NEW	2009/01/20 17:39:07	1.1023
+++ LVM2/WHATS_NEW	2009/01/22 10:21:12	1.1024
@@ -1,5 +1,6 @@
 Version 2.02.44 - 
 ====================================
+  Add corosync/DLM cluster interface to clvmd
   Add --nameprefixes, --unquoted, --rows to pvs, vgs, lvs man pages.
   Fix fsadm failure with block size != 1K.
   Fix pvs segfault when run with orphan PV and some VG fields.
/cvs/lvm2/LVM2/daemons/clvmd/clvmd-corosync.c,v  -->  standard output
revision 1.1
--- LVM2/daemons/clvmd/clvmd-corosync.c
+++ -	2009-01-22 10:21:13.020592000 +0000
@@ -0,0 +1,591 @@
+/******************************************************************************
+*******************************************************************************
+**
+**  Copyright (C) 2009 Red Hat, Inc. All rights reserved.
+**
+*******************************************************************************
+******************************************************************************/
+
+/* This provides the interface between clvmd and corosync/DLM as the cluster
+ * and lock manager.
+ *
+ */
+
+#define _GNU_SOURCE
+#define _FILE_OFFSET_BITS 64
+
+#include <configure.h>
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/utsname.h>
+#include <sys/ioctl.h>
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/file.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <signal.h>
+#include <fcntl.h>
+#include <string.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <errno.h>
+#include <utmpx.h>
+#include <syslog.h>
+#include <assert.h>
+#include <libdevmapper.h>
+
+#include <corosync/corotypes.h>
+#include <corosync/cpg.h>
+#include <corosync/quorum.h>
+#include <libdlm.h>
+
+#include "locking.h"
+#include "lvm-logging.h"
+#include "clvm.h"
+#include "clvmd-comms.h"
+#include "lvm-functions.h"
+#include "clvmd.h"
+
+/* Timeout value for several corosync calls */
+#define LOCKSPACE_NAME "clvmd"
+
+static void cpg_deliver_callback (cpg_handle_t handle,
+				  struct cpg_name *groupName,
+				  uint32_t nodeid,
+				  uint32_t pid,
+				  void *msg,
+				  int msg_len);
+static void cpg_confchg_callback(cpg_handle_t handle,
+				 struct cpg_name *groupName,
+				 struct cpg_address *member_list, int member_list_entries,
+				 struct cpg_address *left_list, int left_list_entries,
+				 struct cpg_address *joined_list, int joined_list_entries);
+static void _cluster_closedown(void);
+
+/* Hash list of nodes in the cluster */
+static struct dm_hash_table *node_hash;
+
+/* Number of active nodes */
+static int num_nodes;
+static unsigned int our_nodeid;
+
+static struct local_client *cluster_client;
+
+/* Corosync handles */
+static cpg_handle_t cpg_handle;
+static quorum_handle_t quorum_handle;
+
+/* DLM Handle */
+static dlm_lshandle_t *lockspace;
+
+static struct cpg_name cpg_group_name;
+
+/* Corosync callback structs */
+cpg_callbacks_t cpg_callbacks = {
+	.cpg_deliver_fn =            cpg_deliver_callback,
+	.cpg_confchg_fn =            cpg_confchg_callback,
+};
+
+quorum_callbacks_t quorum_callbacks = {
+	.quorum_notify_fn = NULL,
+};
+
+struct node_info
+{
+	enum {NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD} state;
+	int nodeid;
+};
+
+
+/* Set errno to something approximating the right value and return 0 or -1 */
+static int cs_to_errno(cs_error_t err)
+{
+	switch(err)
+	{
+	case CS_OK:
+		return 0;
+        case CS_ERR_LIBRARY:
+		errno = EINVAL;
+		break;
+        case CS_ERR_VERSION:
+		errno = EINVAL;
+		break;
+        case CS_ERR_INIT:
+		errno = EINVAL;
+		break;
+        case CS_ERR_TIMEOUT:
+		errno = ETIME;
+		break;
+        case CS_ERR_TRY_AGAIN:
+		errno = EAGAIN;
+		break;
+        case CS_ERR_INVALID_PARAM:
+		errno = EINVAL;
+		break;
+        case CS_ERR_NO_MEMORY:
+		errno = ENOMEM;
+		break;
+        case CS_ERR_BAD_HANDLE:
+		errno = EINVAL;
+		break;
+        case CS_ERR_BUSY:
+		errno = EBUSY;
+		break;
+        case CS_ERR_ACCESS:
+		errno = EPERM;
+		break;
+        case CS_ERR_NOT_EXIST:
+		errno = ENOENT;
+		break;
+        case CS_ERR_NAME_TOO_LONG:
+		errno = ENAMETOOLONG;
+		break;
+        case CS_ERR_EXIST:
+		errno = EEXIST;
+		break;
+        case CS_ERR_NO_SPACE:
+		errno = ENOSPC;
+		break;
+        case CS_ERR_INTERRUPT:
+		errno = EINTR;
+		break;
+	case CS_ERR_NAME_NOT_FOUND:
+		errno = ENOENT;
+		break;
+        case CS_ERR_NO_RESOURCES:
+		errno = ENOMEM;
+		break;
+        case CS_ERR_NOT_SUPPORTED:
+		errno = EOPNOTSUPP;
+		break;
+        case CS_ERR_BAD_OPERATION:
+		errno = EINVAL;
+		break;
+        case CS_ERR_FAILED_OPERATION:
+		errno = EIO;
+		break;
+        case CS_ERR_MESSAGE_ERROR:
+		errno = EIO;
+		break;
+        case CS_ERR_QUEUE_FULL:
+		errno = EXFULL;
+		break;
+        case CS_ERR_QUEUE_NOT_AVAILABLE:
+		errno = EINVAL;
+		break;
+        case CS_ERR_BAD_FLAGS:
+		errno = EINVAL;
+		break;
+        case CS_ERR_TOO_BIG:
+		errno = E2BIG;
+		break;
+        case CS_ERR_NO_SECTIONS:
+		errno = ENOMEM;
+		break;
+	default:
+		errno = EINVAL;
+		break;
+	}
+	return -1;
+}
+
+static char *print_csid(const char *csid)
+{
+	static char buf[128];
+	int id;
+
+	memcpy(&id, csid, sizeof(int));
+	sprintf(buf, "%d", id);
+	return buf;
+}
+
+static void cpg_deliver_callback (cpg_handle_t handle,
+				  struct cpg_name *groupName,
+				  uint32_t nodeid,
+				  uint32_t pid,
+				  void *msg,
+				  int msg_len)
+{
+	int target_nodeid;
+
+	memcpy(&target_nodeid, msg, COROSYNC_CSID_LEN);
+
+	DEBUGLOG("%u got message from nodeid %d for %d. len %d\n",
+		 our_nodeid, nodeid, target_nodeid, msg_len-4);
+
+	if (nodeid != our_nodeid)
+		if (target_nodeid == our_nodeid || target_nodeid == 0)
+			process_message(cluster_client, (char *)msg+COROSYNC_CSID_LEN,
+					msg_len-COROSYNC_CSID_LEN, (char*)&nodeid);
+}
+
+static void cpg_confchg_callback(cpg_handle_t handle,
+				 struct cpg_name *groupName,
+				 struct cpg_address *member_list, int member_list_entries,
+				 struct cpg_address *left_list, int left_list_entries,
+				 struct cpg_address *joined_list, int joined_list_entries)
+{
+	int i;
+	struct node_info *ninfo;
+
+	DEBUGLOG("confchg callback. %d joined, %d left, %d members\n",
+		 joined_list_entries, left_list_entries, member_list_entries);
+
+	for (i=0; i<joined_list_entries; i++) {
+		ninfo = dm_hash_lookup_binary(node_hash,
+					      (char *)&joined_list[i].nodeid,
+					      COROSYNC_CSID_LEN);
+		if (!ninfo) {
+			ninfo = malloc(sizeof(struct node_info));
+			if (!ninfo) {
+				break;
+			}
+			else {
+				ninfo->nodeid = joined_list[i].nodeid;
+				dm_hash_insert_binary(node_hash,
+						      (char *)&ninfo->nodeid,
+						      COROSYNC_CSID_LEN, ninfo);
+			}
+		}
+		ninfo->state = NODE_CLVMD;
+	}
+
+	for (i=0; i<left_list_entries; i++) {
+		ninfo = dm_hash_lookup_binary(node_hash,
+					      (char *)&left_list[i].nodeid,
+					      COROSYNC_CSID_LEN);
+		if (ninfo)
+			ninfo->state = NODE_DOWN;
+	}
+
+	for (i=0; i<member_list_entries; i++) {
+		if (member_list[i].nodeid == 0) continue;
+		ninfo = dm_hash_lookup_binary(node_hash,
+				(char *)&member_list[i].nodeid,
+				COROSYNC_CSID_LEN);
+		if (!ninfo) {
+			ninfo = malloc(sizeof(struct node_info));
+			if (!ninfo) {
+				break;
+			}
+			else {
+				ninfo->nodeid = member_list[i].nodeid;
+				dm_hash_insert_binary(node_hash,
+						(char *)&ninfo->nodeid,
+						COROSYNC_CSID_LEN, ninfo);
+			}
+		}
+		ninfo->state = NODE_CLVMD;
+	}
+
+	num_nodes = member_list_entries;
+}
+
+static int _init_cluster(void)
+{
+	cs_error_t err;
+
+	node_hash = dm_hash_create(100);
+
+	err = cpg_initialize(&cpg_handle,
+			     &cpg_callbacks);
+	if (err != CS_OK) {
+		syslog(LOG_ERR, "Cannot initialise Corosync CPG service: %d",
+		       err);
+		DEBUGLOG("Cannot initialise Corosync CPG service: %d", err);
+		return cs_to_errno(err);
+	}
+
+	err = quorum_initialize(&quorum_handle,
+				&quorum_callbacks);
+	if (err != CS_OK) {
+		syslog(LOG_ERR, "Cannot initialise Corosync quorum service: %d",
+		       err);
+		DEBUGLOG("Cannot initialise Corosync quorum service: %d", err);
+		return cs_to_errno(err);
+	}
+
+
+	/* Create a lockspace for LV & VG locks to live in */
+	lockspace = dlm_create_lockspace(LOCKSPACE_NAME, 0600);
+	if (!lockspace) {
+		syslog(LOG_ERR, "Unable to create lockspace for CLVM: %m");
+		quorum_finalize(quorum_handle);
+		return -1;
+	}
+	dlm_ls_pthread_init(lockspace);
+	DEBUGLOG("DLM initialisation complete\n");
+
+	err = cpg_initialize(&cpg_handle, &cpg_callbacks);
+	if (err != CS_OK) {
+		return cs_to_errno(err);
+	}
+
+	/* Connect to the clvmd group */
+	strcpy((char *)cpg_group_name.value, "clvmd");
+	cpg_group_name.length = strlen((char *)cpg_group_name.value);
+	err = cpg_join(cpg_handle, &cpg_group_name);
+	if (err != CS_OK) {
+		cpg_finalize(cpg_handle);
+		quorum_finalize(quorum_handle);
+		dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 0);
+		syslog(LOG_ERR, "Cannot join clvmd process group");
+		DEBUGLOG("Cannot join clvmd process group: %d\n", err);
+		return cs_to_errno(err);
+	}
+
+	err = cpg_local_get(cpg_handle,
+			    &our_nodeid);
+	if (err != CS_OK) {
+		cpg_finalize(cpg_handle);
+		quorum_finalize(quorum_handle);
+		dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 0);
+		syslog(LOG_ERR, "Cannot get local node id\n");
+		return cs_to_errno(err);
+	}
+	DEBUGLOG("Our local node id is %d\n", our_nodeid);
+
+	DEBUGLOG("Connected to Corosync\n");
+
+	return 0;
+}
+
+static void _cluster_closedown(void)
+{
+	DEBUGLOG("cluster_closedown\n");
+	unlock_all();
+
+	dlm_release_lockspace(LOCKSPACE_NAME, lockspace, 0);
+	cpg_finalize(cpg_handle);
+	quorum_finalize(quorum_handle);
+}
+
+static void _get_our_csid(char *csid)
+{
+	memcpy(csid, &our_nodeid, sizeof(int));
+}
+
+/* Corosync doesn't really have node names so we
+   just use the node ID in hex instead */
+static int _csid_from_name(char *csid, const char *name)
+{
+	int nodeid;
+	struct node_info *ninfo;
+
+	if (sscanf(name, "%x", &nodeid) == 1) {
+		ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
+		if (ninfo)
+			return nodeid;
+	}
+	return -1;
+}
+
+static int _name_from_csid(const char *csid, char *name)
+{
+	struct node_info *ninfo;
+
+	ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
+	if (!ninfo)
+	{
+		sprintf(name, "UNKNOWN %s", print_csid(csid));
+		return -1;
+	}
+
+	sprintf(name, "%x", ninfo->nodeid);
+	return 0;
+}
+
+static int _get_num_nodes()
+{
+	DEBUGLOG("num_nodes = %d\n", num_nodes);
+	return num_nodes;
+}
+
+/* Node is now known to be running a clvmd */
+static void _add_up_node(const char *csid)
+{
+	struct node_info *ninfo;
+
+	ninfo = dm_hash_lookup_binary(node_hash, csid, COROSYNC_CSID_LEN);
+	if (!ninfo) {
+		DEBUGLOG("corosync_add_up_node no node_hash entry for csid %s\n",
+			 print_csid(csid));
+		return;
+	}
+
+	DEBUGLOG("corosync_add_up_node %d\n", ninfo->nodeid);
+
+	ninfo->state = NODE_CLVMD;
+
+	return;
+}
+
+/* Call a callback for each node, so the caller knows whether it's up or down */
+static int _cluster_do_node_callback(struct local_client *master_client,
+				     void (*callback)(struct local_client *,
+						      const char *csid, int node_up))
+{
+	struct dm_hash_node *hn;
+	struct node_info *ninfo;
+	int somedown = 0;
+
+	dm_hash_iterate(hn, node_hash)
+	{
+		char csid[COROSYNC_CSID_LEN];
+
+		ninfo = dm_hash_get_data(node_hash, hn);
+		memcpy(csid, dm_hash_get_key(node_hash, hn), COROSYNC_CSID_LEN);
+
+		DEBUGLOG("down_callback. node %d, state = %d\n", ninfo->nodeid,
+			 ninfo->state);
+
+		if (ninfo->state != NODE_DOWN)
+			callback(master_client, csid, ninfo->state == NODE_CLVMD);
+		if (ninfo->state != NODE_CLVMD)
+			somedown = -1;
+	}
+	return somedown;
+}
+
+/* Real locking */
+static int _lock_resource(const char *resource, int mode, int flags, int *lockid)
+{
+	struct dlm_lksb lksb;
+	int err;
+
+	DEBUGLOG("lock_resource '%s', flags=%d, mode=%d\n", resource, flags, mode);
+
+	if (flags & LKF_CONVERT)
+		lksb.sb_lkid = *lockid;
+
+	err = dlm_ls_lock_wait(lockspace,
+			       mode,
+			       &lksb,
+			       flags,
+			       resource,
+			       strlen(resource),
+			       0,
+			       NULL, NULL, NULL);
+
+	if (err != 0)
+	{
+		DEBUGLOG("dlm_ls_lock returned %d\n", errno);
+		return err;
+	}
+
+	DEBUGLOG("lock_resource returning %d, lock_id=%x\n", err, lksb.sb_lkid);
+
+	*lockid = lksb.sb_lkid;
+
+	return 0;
+}
+
+
+static int _unlock_resource(const char *resource, int lockid)
+{
+	struct dlm_lksb lksb;
+	int err;
+
+	DEBUGLOG("unlock_resource: %s lockid: %x\n", resource, lockid);
+	lksb.sb_lkid = lockid;
+
+	err = dlm_ls_unlock_wait(lockspace,
+				 lockid,
+				 0,
+				 &lksb);
+	if (err != 0)
+	{
+		DEBUGLOG("Unlock returned %d\n", err);
+		return err;
+	}
+
+	return 0;
+}
+
+/* We are always quorate ! */
+static int _is_quorate()
+{
+	int quorate;
+	if (quorum_getquorate(quorum_handle, &quorate) == CS_OK)
+		return quorate;
+	else
+		return 0;
+}
+
+static int _get_main_cluster_fd(void)
+{
+	int select_fd;
+
+	cpg_fd_get(cpg_handle, &select_fd);
+	return select_fd;
+}
+
+static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
+				const char *csid,
+				struct local_client **new_client)
+{
+	cluster_client = fd;
+	*new_client = NULL;
+	cpg_dispatch(cpg_handle, CS_DISPATCH_ONE);
+	return 1;
+}
+
+static int _cluster_send_message(const void *buf, int msglen, const char *csid,
+				 const char *errtext)
+{
+	struct iovec iov[2];
+	cs_error_t err;
+	int target_node;
+
+	if (csid)
+		memcpy(&target_node, csid, COROSYNC_CSID_LEN);
+	else
+		target_node = 0;
+
+	iov[0].iov_base = &target_node;
+	iov[0].iov_len = sizeof(int);
+	iov[1].iov_base = (char *)buf;
+	iov[1].iov_len = msglen;
+
+	err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
+	return cs_to_errno(err);
+}
+
+/* We don't have a cluster name to report here */
+static int _get_cluster_name(char *buf, int buflen)
+{
+	strncpy(buf, "Corosync", buflen);
+	return 0;
+}
+
+static struct cluster_ops _cluster_corosync_ops = {
+	.cluster_init_completed   = NULL,
+	.cluster_send_message     = _cluster_send_message,
+	.name_from_csid           = _name_from_csid,
+	.csid_from_name           = _csid_from_name,
+	.get_num_nodes            = _get_num_nodes,
+	.cluster_fd_callback      = _cluster_fd_callback,
+	.get_main_cluster_fd      = _get_main_cluster_fd,
+	.cluster_do_node_callback = _cluster_do_node_callback,
+	.is_quorate               = _is_quorate,
+	.get_our_csid             = _get_our_csid,
+	.add_up_node              = _add_up_node,
+	.reread_config            = NULL,
+	.cluster_closedown        = _cluster_closedown,
+	.get_cluster_name         = _get_cluster_name,
+	.sync_lock                = _lock_resource,
+	.sync_unlock              = _unlock_resource,
+};
+
+struct cluster_ops *init_corosync_cluster(void)
+{
+	if (!_init_cluster())
+		return &_cluster_corosync_ops;
+	else
+		return NULL;
+}
--- LVM2/daemons/clvmd/Makefile.in	2008/10/07 19:11:59	1.24
+++ LVM2/daemons/clvmd/Makefile.in	2009/01/22 10:21:12	1.25
@@ -39,6 +39,14 @@
 	GULM = yes
 	CMAN = yes
 	OPENAIS = no
+	COROSYNC = no
+endif
+
+ifeq ("@CLVMD@", "corosync")
+	GULM = no
+	CMAN = no
+	OPENAIS = no
+	COROSYNC = yes
 endif
 
 ifeq ("@DEBUG@", "yes")
@@ -63,6 +71,13 @@
 	DEFS += -DUSE_OPENAIS
 endif
 
+ifeq ("$(COROSYNC)", "yes")
+        SOURCES += clvmd-corosync.c
+        LMLIBS += -lquorum -lcpg -ldlm
+        DEFS += -DUSE_COROSYNC
+endif
+
+
 TARGETS = \
 	clvmd
 
--- LVM2/daemons/clvmd/clvmd-comms.h	2007/05/21 10:52:01	1.8
+++ LVM2/daemons/clvmd/clvmd-comms.h	2009/01/22 10:21:12	1.9
@@ -93,5 +93,22 @@
 struct cluster_ops *init_openais_cluster(void);
 #endif
 
+#ifdef USE_COROSYNC
+#  include <corosync/corotypes.h>
+#  define COROSYNC_CSID_LEN (sizeof(int))
+#  define COROSYNC_MAX_CLUSTER_MESSAGE         65535
+#  define COROSYNC_MAX_CLUSTER_MEMBER_NAME_LEN CS_MAX_NAME_LENGTH
+#  ifndef MAX_CLUSTER_MEMBER_NAME_LEN
+#    define MAX_CLUSTER_MEMBER_NAME_LEN       CS_MAX_NAME_LENGTH
+#  endif
+#  ifndef CMAN_MAX_CLUSTER_MESSAGE
+#    define CMAN_MAX_CLUSTER_MESSAGE          65535
+#  endif
+#  ifndef MAX_CSID_LEN
+#    define MAX_CSID_LEN sizeof(int)
+#  endif
+struct cluster_ops *init_corosync_cluster(void);
+#endif
+
 
 #endif
--- LVM2/daemons/clvmd/clvmd.c	2008/11/21 13:48:00	1.52
+++ LVM2/daemons/clvmd/clvmd.c	2009/01/22 10:21:12	1.53
@@ -391,6 +391,15 @@
 			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to OpenAIS");
 		}
 #endif
+#ifdef USE_COROSYNC
+	if (!clops)
+		if ((clops = init_corosync_cluster())) {
+			max_csid_len = COROSYNC_CSID_LEN;
+			max_cluster_message = COROSYNC_MAX_CLUSTER_MESSAGE;
+			max_cluster_member_name_len = COROSYNC_MAX_CLUSTER_MEMBER_NAME_LEN;
+			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to Corosync");
+		}
+#endif
 
 	if (!clops) {
 		DEBUGLOG("Can't initialise cluster interface\n");



* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2007-06-25  9:02 pcaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: pcaulfield @ 2007-06-25  9:02 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2007-06-25 09:02:37

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd-openais.c 

Log message:
	Use cpg_local_get() rather than Clm to get the local nodeid.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.639&r2=1.640
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.19&r2=1.20
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=1.1&r2=1.2

--- LVM2/WHATS_NEW	2007/06/19 10:51:51	1.639
+++ LVM2/WHATS_NEW	2007/06/25 09:02:37	1.640
@@ -4,6 +4,7 @@
   Add vg_status function and clean up vg->status in tools directory.
   Add --ignoremonitoring to disable all dmeventd interaction.
   Remove get_ prefix from get_pv_* functions.
+  clvmd-openais now uses cpg_local_get() to get nodeid, rather than Clm.
 
 Version 2.02.26 - 15th June 2007
 ================================
--- LVM2/daemons/clvmd/Makefile.in	2007/06/14 10:16:34	1.19
+++ LVM2/daemons/clvmd/Makefile.in	2007/06/25 09:02:37	1.20
@@ -59,7 +59,7 @@
 
 ifeq ("$(OPENAIS)", "yes")
 	SOURCES += clvmd-openais.c
-	LMLIBS += -lSaLck -lSaClm -lcpg
+	LMLIBS += -lSaLck -lcpg
 	DEFS += -DUSE_OPENAIS
 endif
 
--- LVM2/daemons/clvmd/clvmd-openais.c	2007/05/21 10:52:01	1.1
+++ LVM2/daemons/clvmd/clvmd-openais.c	2007/06/25 09:02:37	1.2
@@ -98,9 +98,6 @@
         .saLckResourceUnlockCallback = lck_unlock_callback
 };
 
-/* We only call Clm to get our node id */
-SaClmCallbacksT clm_callbacks;
-
 struct node_info
 {
 	enum {NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD} state;
@@ -348,7 +345,6 @@
 {
 	SaAisErrorT err;
 	SaVersionT  ver = { 'B', 1, 1 };
-	SaClmHandleT clm_handle;
 	int select_fd;
 	SaClmClusterNodeT cluster_node;
 
@@ -387,26 +383,14 @@
 		return ais_to_errno(err);
 	}
 
-	/* A brief foray into Clm to get our node id */
-	err = saClmInitialize(&clm_handle, &clm_callbacks, &ver);
-	if (err != SA_AIS_OK) {
-		syslog(LOG_ERR, "Could not initialize OpenAIS membership service %d\n", err);
-		DEBUGLOG("Could not initialize OpenAIS Membership service %d\n", err);
-		return ais_to_errno(err);
-	}
-
-	err = saClmClusterNodeGet(clm_handle,
-				  SA_CLM_LOCAL_NODE_ID,
-				  TIMEOUT,
-				  &cluster_node);
+	err = cpg_local_get(cpg_handle,
+			    &cluster_node);
 	if (err != SA_AIS_OK) {
 		cpg_finalize(cpg_handle);
 		saLckFinalize(lck_handle);
-		saClmFinalize(clm_handle);
 		syslog(LOG_ERR, "Cannot get local node id\n");
 		return ais_to_errno(err);
 	}
-	saClmFinalize(clm_handle);
 	our_nodeid = cluster_node.nodeId;
 	DEBUGLOG("Our local node id is %d\n", our_nodeid);
 
@@ -424,7 +408,7 @@
 	unlock_all();
 
 	saLckFinalize(lck_handle);
-	cpg_inalize(cpg_handle);
+	cpg_finalize(cpg_handle);
 }
 
 static void _get_our_csid(char *csid)



* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2007-06-14 10:16 pcaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: pcaulfield @ 2007-06-14 10:16 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2007-06-14 10:16:35

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd.c 

Log message:
	Remove system LV code from clvmd. It's never been used and never should be
	used! Its removal tidies a number of code paths inside clvmd.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.631&r2=1.632
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.18&r2=1.19
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.c.diff?cvsroot=lvm2&r1=1.37&r2=1.38

--- LVM2/WHATS_NEW	2007/06/13 23:57:15	1.631
+++ LVM2/WHATS_NEW	2007/06/14 10:16:34	1.632
@@ -1,5 +1,6 @@
 Version 2.02.26 -
 =================================
+  Remove system-lv code from clvmd. It's highly dodgy and never used.
   Convert a lot of code pv dereferences to use get_pv_* functions.
   Suppress a couple benign warnings by adding variable initializations.
   Convert find_pv_in_vg_by_uuid and pv_create to use PV handles.
--- LVM2/daemons/clvmd/Makefile.in	2007/05/21 10:52:01	1.18
+++ LVM2/daemons/clvmd/Makefile.in	2007/06/14 10:16:34	1.19
@@ -19,8 +19,7 @@
 	clvmd-command.c  \
 	clvmd.c          \
 	lvm-functions.c  \
-	refresh_clvmd.c \
-	system-lv.c
+	refresh_clvmd.c
 
 ifeq ("@CLVMD@", "gulm")
 	GULM = yes
--- LVM2/daemons/clvmd/clvmd.c	2007/05/21 10:52:01	1.37
+++ LVM2/daemons/clvmd/clvmd.c	2007/06/14 10:16:35	1.38
@@ -45,7 +45,6 @@
 #include "version.h"
 #include "clvmd.h"
 #include "refresh_clvmd.h"
-#include "system-lv.h"
 #include "list.h"
 #include "log.h"
 
@@ -58,10 +57,6 @@
 
 #define MAX_RETRIES 4
 
-/* The maximum size of a message that will fit into a packet. Anything bigger
-   than this is sent via the system LV */
-#define MAX_INLINE_MESSAGE (max_cluster_message-sizeof(struct clvm_header))
-
 #define ISLOCAL_CSID(c) (memcmp(c, our_csid, max_csid_len) == 0)
 
 /* Head of the fd list. Also contains
@@ -1062,31 +1057,6 @@
 	return 0;
 }
 
-
-/*
- * Send a long message using the System LV
- */
-static int send_long_message(struct local_client *thisfd, struct clvm_header *inheader, int len)
-{
-    struct clvm_header new_header;
-    int status;
-
-    DEBUGLOG("Long message: being sent via system LV:\n");
-
-    /* Use System LV */
-    status = system_lv_write_data((char *)inheader, len);
-    if (status < 0)
-	    return errno;
-
-    /* Send message indicating System-LV is being used */
-    memcpy(&new_header, inheader, sizeof(new_header));
-    new_header.flags |= CLVMD_FLAG_SYSTEMLV;
-    new_header.xid = thisfd->xid;
-
-    return send_message(&new_header, sizeof(new_header), NULL, -1,
-		 "Error forwarding long message to cluster");
-}
-
 /* Called when the pre-command has completed successfully - we
    now execute the real command on all the requested nodes */
 static int distribute_command(struct local_client *thisfd)
@@ -1113,13 +1083,9 @@
 			add_to_lvmqueue(thisfd, inheader, len, NULL);
 
 			DEBUGLOG("Sending message to all cluster nodes\n");
-			if (len > MAX_INLINE_MESSAGE) {
-			        send_long_message(thisfd, inheader, len );
-			} else {
-				inheader->xid = thisfd->xid;
-				send_message(inheader, len, NULL, -1,
-					     "Error forwarding message to cluster");
-			}
+			inheader->xid = thisfd->xid;
+			send_message(inheader, len, NULL, -1,
+				     "Error forwarding message to cluster");
 		} else {
                         /* Do it on a single node */
 			char csid[MAX_CSID_LEN];
@@ -1140,14 +1106,10 @@
 				} else {
 					DEBUGLOG("Sending message to single node: %s\n",
 						 inheader->node);
-					if (len > MAX_INLINE_MESSAGE) {
-					        send_long_message(thisfd, inheader, len );
-					} else {
-						inheader->xid = thisfd->xid;
-						send_message(inheader, len,
-							     csid, -1,
-							     "Error forwarding message to cluster node");
-					}
+					inheader->xid = thisfd->xid;
+					send_message(inheader, len,
+						     csid, -1,
+						     "Error forwarding message to cluster node");
 				}
 			}
 		}
@@ -1178,55 +1140,6 @@
 	DEBUGLOG("process_remote_command %d for clientid 0x%x XID %d on node %s\n",
 		 msg->cmd, msg->clientid, msg->xid, nodename);
 
-	/* Is the data to be found in the system LV ? */
-	if (msg->flags & CLVMD_FLAG_SYSTEMLV) {
-		struct clvm_header *newmsg;
-
-		DEBUGLOG("Reading message from system LV\n");
-		newmsg =
-		    (struct clvm_header *) malloc(msg->arglen +
-						  sizeof(struct clvm_header));
-		if (newmsg) {
-			ssize_t len;
-			if (system_lv_read_data(nodename, (char *) newmsg,
-			     			&len) == 0) {
-				msg = newmsg;
-				msg_malloced = 1;
-				msglen = len;
-			} else {
-				struct clvm_header head;
-				DEBUGLOG("System LV read failed\n");
-
-				/* Return a failure response */
-				head.cmd = CLVMD_CMD_REPLY;
-				head.status = EFBIG;
-				head.flags = 0;
-				head.clientid = msg->clientid;
-				head.arglen = 0;
-				head.node[0] = '\0';
-				send_message(&head, sizeof(struct clvm_header),
-					     csid, fd,
-					     "Error sending ENOMEM command reply");
-				return;
-			}
-		} else {
-			struct clvm_header head;
-			DEBUGLOG
-			    ("Error attempting to malloc %d bytes for system LV read\n",
-			     msg->arglen);
-			/* Return a failure response */
-			head.cmd = CLVMD_CMD_REPLY;
-			head.status = ENOMEM;
-			head.flags = 0;
-			head.clientid = msg->clientid;
-			head.arglen = 0;
-			head.node[0] = '\0';
-			send_message(&head, sizeof(struct clvm_header), csid,
-				     fd, "Error sending ENOMEM command reply");
-			return;
-		}
-	}
-
 	/* Check for GOAWAY and sulk */
 	if (msg->cmd == CLVMD_CMD_GOAWAY) {
 
@@ -1301,40 +1214,16 @@
 				replyargs, replylen);
 
 			agghead->xid = msg->xid;
-
-			/* Use the system LV ? */
-			if (replylen > MAX_INLINE_MESSAGE) {
-				agghead->cmd = CLVMD_CMD_REPLY;
-				agghead->status = status;
-				agghead->flags = CLVMD_FLAG_SYSTEMLV;
-				agghead->clientid = msg->clientid;
-				agghead->arglen = replylen;
-				agghead->node[0] = '\0';
-
-				/* If System LV operation failed then report it as EFBIG but only do it
-				   if the data buffer has something in it. */
-				if (system_lv_write_data(aggreply,
-							 replylen + sizeof(struct clvm_header)) < 0
-				    && replylen > 0)
-					agghead->status = EFBIG;
-
-				send_message(agghead,
-					     sizeof(struct clvm_header), csid,
-					     fd,
-					     "Error sending long command reply");
-
-			} else {
-				agghead->cmd = CLVMD_CMD_REPLY;
-				agghead->status = status;
-				agghead->flags = 0;
-				agghead->clientid = msg->clientid;
-				agghead->arglen = replylen;
-				agghead->node[0] = '\0';
-				send_message(aggreply,
-					     sizeof(struct clvm_header) +
-					     replylen, csid, fd,
-					     "Error sending command reply");
-			}
+			agghead->cmd = CLVMD_CMD_REPLY;
+			agghead->status = status;
+			agghead->flags = 0;
+			agghead->clientid = msg->clientid;
+			agghead->arglen = replylen;
+			agghead->node[0] = '\0';
+			send_message(aggreply,
+				     sizeof(struct clvm_header) +
+				     replylen, csid, fd,
+				     "Error sending command reply");
 		} else {
 			struct clvm_header head;
 



* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2007-05-21 10:52 pcaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: pcaulfield @ 2007-05-21 10:52 UTC (permalink / raw)
  To: lvm-devel, lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2007-05-21 10:52:01

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd-comms.h clvmd.c 
Added files:
	daemons/clvmd  : clvmd-openais.c 

Log message:
	Add *Experimental* OpenAIS support to clvmd.

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.618&r2=1.619
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-openais.c.diff?cvsroot=lvm2&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.17&r2=1.18
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-comms.h.diff?cvsroot=lvm2&r1=1.7&r2=1.8
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.c.diff?cvsroot=lvm2&r1=1.36&r2=1.37

--- LVM2/WHATS_NEW	2007/05/15 14:42:01	1.618
+++ LVM2/WHATS_NEW	2007/05/21 10:52:01	1.619
@@ -1,5 +1,6 @@
 Version 2.02.26 -
 =================================
+  Add (experimental) OpenAIS support to clvmd.
   Remove symlinks if parent volume is deactivated.
   Fix and clarify vgsplit error messages.
   Fix a segfault if a device has no target (no table)
/cvs/lvm2/LVM2/daemons/clvmd/clvmd-openais.c,v  -->  standard output
revision 1.1
--- LVM2/daemons/clvmd/clvmd-openais.c
+++ -	2007-05-21 10:52:01.972639000 +0000
@@ -0,0 +1,756 @@
+/******************************************************************************
+*******************************************************************************
+**
+**  Copyright (C) 2007 Red Hat, Inc. All rights reserved.
+**
+*******************************************************************************
+******************************************************************************/
+
+/* This provides the interface between clvmd and OpenAIS as the cluster
+ * and lock manager.
+ *
+ */
+
+#include <pthread.h>
+#include <sys/types.h>
+#include <sys/utsname.h>
+#include <sys/ioctl.h>
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/file.h>
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <signal.h>
+#include <fcntl.h>
+#include <string.h>
+#include <stddef.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <errno.h>
+#include <utmpx.h>
+#include <syslog.h>
+#include <assert.h>
+#include <libdevmapper.h>
+
+#include <openais/saAis.h>
+#include <openais/saLck.h>
+#include <openais/saClm.h>
+#include <openais/cpg.h>
+
+#include "list.h"
+#include "locking.h"
+#include "log.h"
+#include "clvm.h"
+#include "clvmd-comms.h"
+#include "lvm-functions.h"
+#include "clvmd.h"
+
+/* Timeout value for several openais calls */
+#define TIMEOUT 10
+
+static void lck_lock_callback(SaInvocationT invocation,
+			      SaLckLockStatusT lockStatus,
+			      SaAisErrorT error);
+static void lck_unlock_callback(SaInvocationT invocation,
+				SaAisErrorT error);
+static void cpg_deliver_callback (cpg_handle_t handle,
+				  struct cpg_name *groupName,
+				  uint32_t nodeid,
+				  uint32_t pid,
+				  void *msg,
+				  int msg_len);
+static void cpg_confchg_callback(cpg_handle_t handle,
+				 struct cpg_name *groupName,
+				 struct cpg_address *member_list, int member_list_entries,
+				 struct cpg_address *left_list, int left_list_entries,
+				 struct cpg_address *joined_list, int joined_list_entries);
+static void _cluster_closedown(void);
+
+/* Hash list of nodes in the cluster */
+static struct dm_hash_table *node_hash;
+
+/* For associating lock IDs & resource handles */
+static struct dm_hash_table *lock_hash;
+
+/* Number of active nodes */
+static int num_nodes;
+static unsigned int our_nodeid;
+
+static struct local_client *cluster_client;
+
+/* OpenAIS handles */
+static cpg_handle_t cpg_handle;
+static SaLckHandleT lck_handle;
+
+static struct cpg_name cpg_group_name;
+
+/* Openais callback structs */
+cpg_callbacks_t cpg_callbacks = {
+	.cpg_deliver_fn =            cpg_deliver_callback,
+	.cpg_confchg_fn =            cpg_confchg_callback,
+};
+
+SaLckCallbacksT lck_callbacks = {
+        .saLckLockGrantCallback      = lck_lock_callback,
+        .saLckResourceUnlockCallback = lck_unlock_callback
+};
+
+/* We only call Clm to get our node id */
+SaClmCallbacksT clm_callbacks;
+
+struct node_info
+{
+	enum {NODE_UNKNOWN, NODE_DOWN, NODE_UP, NODE_CLVMD} state;
+	int nodeid;
+};
+
+struct lock_info
+{
+	SaLckResourceHandleT res_handle;
+	SaLckLockIdT         lock_id;
+	SaNameT              lock_name;
+};
+
+struct lock_wait
+{
+	pthread_cond_t cond;
+	pthread_mutex_t mutex;
+	int status;
+};
+
+/* Set errno to something approximating the right value and return 0 or -1 */
+static int ais_to_errno(SaAisErrorT err)
+{
+	switch(err)
+	{
+	case SA_AIS_OK:
+		return 0;
+        case SA_AIS_ERR_LIBRARY:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_VERSION:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_INIT:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_TIMEOUT:
+		errno = ETIME;
+		break;
+        case SA_AIS_ERR_TRY_AGAIN:
+		errno = EAGAIN;
+		break;
+        case SA_AIS_ERR_INVALID_PARAM:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_NO_MEMORY:
+		errno = ENOMEM;
+		break;
+        case SA_AIS_ERR_BAD_HANDLE:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_BUSY:
+		errno = EBUSY;
+		break;
+        case SA_AIS_ERR_ACCESS:
+		errno = EPERM;
+		break;
+        case SA_AIS_ERR_NOT_EXIST:
+		errno = ENOENT;
+		break;
+        case SA_AIS_ERR_NAME_TOO_LONG:
+		errno = ENAMETOOLONG;
+		break;
+        case SA_AIS_ERR_EXIST:
+		errno = EEXIST;
+		break;
+        case SA_AIS_ERR_NO_SPACE:
+		errno = ENOSPC;
+		break;
+        case SA_AIS_ERR_INTERRUPT:
+		errno = EINTR;
+		break;
+	case SA_AIS_ERR_NAME_NOT_FOUND:
+		errno = ENOENT;
+		break;
+        case SA_AIS_ERR_NO_RESOURCES:
+		errno = ENOMEM;
+		break;
+        case SA_AIS_ERR_NOT_SUPPORTED:
+		errno = EOPNOTSUPP;
+		break;
+        case SA_AIS_ERR_BAD_OPERATION:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_FAILED_OPERATION:
+		errno = EIO;
+		break;
+        case SA_AIS_ERR_MESSAGE_ERROR:
+		errno = EIO;
+		break;
+        case SA_AIS_ERR_QUEUE_FULL:
+		errno = EXFULL;
+		break;
+        case SA_AIS_ERR_QUEUE_NOT_AVAILABLE:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_BAD_FLAGS:
+		errno = EINVAL;
+		break;
+        case SA_AIS_ERR_TOO_BIG:
+		errno = E2BIG;
+		break;
+        case SA_AIS_ERR_NO_SECTIONS:
+		errno = ENOMEM;
+		break;
+	default:
+		errno = EINVAL;
+		break;
+	}
+	return -1;
+}
+
+static char *print_csid(const char *csid)
+{
+	static char buf[128];
+	int id;
+
+	memcpy(&id, csid, sizeof(int));
+	sprintf(buf, "%d", id);
+	return buf;
+}
+
+static int add_internal_client(int fd, fd_callback_t callback)
+{
+	struct local_client *client;
+
+	DEBUGLOG("Add_internal_client, fd = %d\n", fd);
+
+	client = malloc(sizeof(struct local_client));
+	if (!client)
+	{
+		DEBUGLOG("malloc failed\n");
+		return -1;
+	}
+
+	memset(client, 0, sizeof(struct local_client));
+	client->fd = fd;
+	client->type = CLUSTER_INTERNAL;
+	client->callback = callback;
+	add_client(client);
+
+	/* Set Close-on-exec */
+	fcntl(fd, F_SETFD, 1);
+
+	return 0;
+}
+
+static void cpg_deliver_callback (cpg_handle_t handle,
+				  struct cpg_name *groupName,
+				  uint32_t nodeid,
+				  uint32_t pid,
+				  void *msg,
+				  int msg_len)
+{
+	int target_nodeid;
+
+	memcpy(&target_nodeid, msg, OPENAIS_CSID_LEN);
+
+	DEBUGLOG("Got message from nodeid %d for %d. len %d\n",
+		 nodeid, target_nodeid, msg_len-4);
+
+	if (target_nodeid == our_nodeid)
+		process_message(cluster_client, (char *)msg+OPENAIS_CSID_LEN,
+				msg_len-OPENAIS_CSID_LEN, (char*)&nodeid);
+}
+
+static void cpg_confchg_callback(cpg_handle_t handle,
+				 struct cpg_name *groupName,
+				 struct cpg_address *member_list, int member_list_entries,
+				 struct cpg_address *left_list, int left_list_entries,
+				 struct cpg_address *joined_list, int joined_list_entries)
+{
+	int i;
+	struct node_info *ninfo;
+
+	DEBUGLOG("confchg callback. %d joined, %d left, %d members\n",
+		 joined_list_entries, left_list_entries, member_list_entries);
+
+	for (i=0; i<joined_list_entries; i++) {
+		ninfo = dm_hash_lookup_binary(node_hash,
+					      (char *)&joined_list[i].nodeid,
+					      OPENAIS_CSID_LEN);
+		if (!ninfo) {
+			ninfo = malloc(sizeof(struct node_info));
+			if (!ninfo) {
+				break;
+			}
+			else {
+				ninfo->nodeid = joined_list[i].nodeid;
+				dm_hash_insert_binary(node_hash,
+						      (char *)&ninfo->nodeid,
+						      OPENAIS_CSID_LEN, ninfo);
+			}
+		}
+		ninfo->state = NODE_CLVMD;
+	}
+
+	for (i=0; i<left_list_entries; i++) {
+		ninfo = dm_hash_lookup_binary(node_hash,
+					      (char *)&left_list[i].nodeid,
+					      OPENAIS_CSID_LEN);
+		if (ninfo)
+			ninfo->state = NODE_DOWN;
+	}
+
+	num_nodes = joined_list_entries;
+}
+
+static void lck_lock_callback(SaInvocationT invocation,
+			      SaLckLockStatusT lockStatus,
+			      SaAisErrorT error)
+{
+	struct lock_wait *lwait = (struct lock_wait *)(long)invocation;
+
+	DEBUGLOG("lck_lock_callback, error = %d\n", error);
+
+	lwait->status = error;
+	pthread_mutex_lock(&lwait->mutex);
+	pthread_cond_signal(&lwait->cond);
+	pthread_mutex_unlock(&lwait->mutex);
+}
+
+static void lck_unlock_callback(SaInvocationT invocation,
+				SaAisErrorT error)
+{
+	struct lock_wait *lwait = (struct lock_wait *)(long)invocation;
+
+	DEBUGLOG("lck_unlock_callback\n");
+
+	lwait->status = SA_AIS_OK;
+	pthread_mutex_lock(&lwait->mutex);
+	pthread_cond_signal(&lwait->cond);
+	pthread_mutex_unlock(&lwait->mutex);
+}
+
+static int lck_dispatch(struct local_client *client, char *buf, int len,
+			const char *csid, struct local_client **new_client)
+{
+	*new_client = NULL;
+	saLckDispatch(lck_handle, SA_DISPATCH_ONE);
+	return 1;
+}
+
+static int _init_cluster(void)
+{
+	SaAisErrorT err;
+	SaVersionT  ver = { 'B', 1, 1 };
+	SaClmHandleT clm_handle;
+	int select_fd;
+	SaClmClusterNodeT cluster_node;
+
+	node_hash = dm_hash_create(100);
+	lock_hash = dm_hash_create(10);
+
+	err = cpg_initialize(&cpg_handle,
+			     &cpg_callbacks);
+	if (err != SA_AIS_OK) {
+		syslog(LOG_ERR, "Cannot initialise OpenAIS CPG service: %d",
+		       err);
+		DEBUGLOG("Cannot initialise OpenAIS CPG service: %d", err);
+		return ais_to_errno(err);
+	}
+
+	err = saLckInitialize(&lck_handle,
+			      &lck_callbacks,
+			      &ver);
+	if (err != SA_AIS_OK) {
+		cpg_initialize(&cpg_handle, &cpg_callbacks);
+		syslog(LOG_ERR, "Cannot initialise OpenAIS lock service: %d",
+		       err);
+		DEBUGLOG("Cannot initialise OpenAIS lock service: %d\n\n", err);
+		return ais_to_errno(err);
+	}
+
+	/* Connect to the clvmd group */
+	strcpy((char *)cpg_group_name.value, "clvmd");
+	cpg_group_name.length = strlen((char *)cpg_group_name.value);
+	err = cpg_join(cpg_handle, &cpg_group_name);
+	if (err != SA_AIS_OK) {
+		cpg_finalize(cpg_handle);
+		saLckFinalize(lck_handle);
+		syslog(LOG_ERR, "Cannot join clvmd process group");
+		DEBUGLOG("Cannot join clvmd process group\n");
+		return ais_to_errno(err);
+	}
+
+	/* A brief foray into Clm to get our node id */
+	err = saClmInitialize(&clm_handle, &clm_callbacks, &ver);
+	if (err != SA_AIS_OK) {
+		syslog(LOG_ERR, "Could not initialize OpenAIS membership service %d\n", err);
+		DEBUGLOG("Could not initialize OpenAIS Membership service %d\n", err);
+		return ais_to_errno(err);
+	}
+
+	err = saClmClusterNodeGet(clm_handle,
+				  SA_CLM_LOCAL_NODE_ID,
+				  TIMEOUT,
+				  &cluster_node);
+	if (err != SA_AIS_OK) {
+		cpg_finalize(cpg_handle);
+		saLckFinalize(lck_handle);
+		saClmFinalize(clm_handle);
+		syslog(LOG_ERR, "Cannot get local node id\n");
+		return ais_to_errno(err);
+	}
+	saClmFinalize(clm_handle);
+	our_nodeid = cluster_node.nodeId;
+	DEBUGLOG("Our local node id is %d\n", our_nodeid);
+
+	saLckSelectionObjectGet(lck_handle, (SaSelectionObjectT *)&select_fd);
+	add_internal_client(select_fd, lck_dispatch);
+
+	DEBUGLOG("Connected to OpenAIS\n");
+
+	return 0;
+}
+
+static void _cluster_closedown(void)
+{
+	DEBUGLOG("cluster_closedown\n");
+	unlock_all();
+
+	saLckFinalize(lck_handle);
+	cpg_inalize(cpg_handle);
+}
+
+static void _get_our_csid(char *csid)
+{
+	memcpy(csid, &our_nodeid, sizeof(int));
+}
+
+/* OpenAIS doesn't really have node names so we
+   just use the node ID in hex instead */
+static int _csid_from_name(char *csid, const char *name)
+{
+	int nodeid;
+	struct node_info *ninfo;
+
+	if (sscanf(name, "%x", &nodeid) == 1) {
+		ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
+		if (ninfo)
+			return nodeid;
+	}
+	return -1;
+}
+
+static int _name_from_csid(const char *csid, char *name)
+{
+	struct node_info *ninfo;
+
+	ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
+	if (!ninfo)
+	{
+		sprintf(name, "UNKNOWN %s", print_csid(csid));
+		return -1;
+	}
+
+	sprintf(name, "%x", ninfo->nodeid);
+	return 0;
+}
+
+static int _get_num_nodes()
+{
+	DEBUGLOG("num_nodes = %d\n", num_nodes);
+	return num_nodes;
+}
+
+/* Node is now known to be running a clvmd */
+static void _add_up_node(const char *csid)
+{
+	struct node_info *ninfo;
+
+	ninfo = dm_hash_lookup_binary(node_hash, csid, OPENAIS_CSID_LEN);
+	if (!ninfo) {
+		DEBUGLOG("openais_add_up_node no node_hash entry for csid %s\n",
+			 print_csid(csid));
+		return;
+	}
+
+	DEBUGLOG("openais_add_up_node %d\n", ninfo->nodeid);
+
+	ninfo->state = NODE_CLVMD;
+
+	return;
+}
+
+/* Call a callback for each node, so the caller knows whether it's up or down */
+static int _cluster_do_node_callback(struct local_client *master_client,
+				     void (*callback)(struct local_client *,
+						      const char *csid, int node_up))
+{
+	struct dm_hash_node *hn;
+	struct node_info *ninfo;
+
+	dm_hash_iterate(hn, node_hash)
+	{
+		char csid[OPENAIS_CSID_LEN];
+
+		ninfo = dm_hash_get_data(node_hash, hn);
+		memcpy(csid, dm_hash_get_key(node_hash, hn), OPENAIS_CSID_LEN);
+
+		DEBUGLOG("down_callback. node %d, state = %d\n", ninfo->nodeid,
+			 ninfo->state);
+
+		if (ninfo->state != NODE_DOWN)
+			callback(master_client, csid, ninfo->state == NODE_CLVMD);
+	}
+	return 0;
+}
+
+/* Real locking */
+static int _lock_resource(char *resource, int mode, int flags, int *lockid)
+{
+	struct lock_wait lwait;
+	struct lock_info *linfo;
+	SaLckResourceHandleT res_handle;
+	SaAisErrorT err;
+	SaLckLockIdT lock_id;
+
+	pthread_cond_init(&lwait.cond, NULL);
+	pthread_mutex_init(&lwait.mutex, NULL);
+	pthread_mutex_lock(&lwait.mutex);
+
+	/* This needs to be converted from DLM/LVM2 value for OpenAIS LCK */
+	if (flags & LCK_NONBLOCK) flags = SA_LCK_LOCK_NO_QUEUE;
+
+	linfo = malloc(sizeof(struct lock_info));
+	if (!linfo)
+		return -1;
+
+	DEBUGLOG("lock_resource '%s', flags=%d, mode=%d\n", resource, flags, mode);
+
+	linfo->lock_name.length = strlen(resource)+1;
+	strcpy((char *)linfo->lock_name.value, resource);
+
+	err = saLckResourceOpen(lck_handle, &linfo->lock_name,
+				SA_LCK_RESOURCE_CREATE, TIMEOUT, &res_handle);
+	if (err != SA_AIS_OK)
+	{
+		DEBUGLOG("ResourceOpen returned %d\n", err);
+		free(linfo);
+		return ais_to_errno(err);
+	}
+
+	err = saLckResourceLockAsync(res_handle,
+				     (SaInvocationT)(long)&lwait,
+				     &lock_id,
+				     mode,
+				     flags,
+				     0);
+	if (err != SA_AIS_OK)
+	{
+		free(linfo);
+		saLckResourceClose(res_handle);
+		return ais_to_errno(err);
+	}
+
+	/* Wait for it to complete */
+	pthread_cond_wait(&lwait.cond, &lwait.mutex);
+	pthread_mutex_unlock(&lwait.mutex);
+
+	DEBUGLOG("lock_resource returning %d, lock_id=%llx\n", lwait.status,
+		 lock_id);
+
+	linfo->lock_id = lock_id;
+	linfo->res_handle = res_handle;
+
+	dm_hash_insert(lock_hash, resource, linfo);
+
+	return ais_to_errno(lwait.status);
+}
+
+
+static int _unlock_resource(char *resource, int lockid)
+{
+	struct lock_wait lwait;
+	SaAisErrorT err;
+	struct lock_info *linfo;
+
+	pthread_cond_init(&lwait.cond, NULL);
+	pthread_mutex_init(&lwait.mutex, NULL);
+	pthread_mutex_lock(&lwait.mutex);
+
+	DEBUGLOG("unlock_resource %s\n", resource);
+	linfo = dm_hash_lookup(lock_hash, resource);
+	if (!linfo)
+		return 0;
+
+	DEBUGLOG("unlock_resource: lockid: %llx\n", linfo->lock_id);
+	err = saLckResourceUnlockAsync((SaInvocationT)(long)&lwait, linfo->lock_id);
+	if (err != SA_AIS_OK)
+	{
+		DEBUGLOG("Unlock returned %d\n", err);
+		return ais_to_errno(err);
+	}
+
+	/* Wait for it to complete */
+	pthread_cond_wait(&lwait.cond, &lwait.mutex);
+	pthread_mutex_unlock(&lwait.mutex);
+
+	/* Release the resource */
+	dm_hash_remove(lock_hash, resource);
+	saLckResourceClose(linfo->res_handle);
+	free(linfo);
+
+	return ais_to_errno(lwait.status);
+}
+
+static int _sync_lock(const char *resource, int mode, int flags, int *lockid)
+{
+	int status;
+	char lock1[strlen(resource)+3];
+	char lock2[strlen(resource)+3];
+
+	snprintf(lock1, sizeof(lock1), "%s-1", resource);
+	snprintf(lock2, sizeof(lock2), "%s-2", resource);
+
+	switch (mode)
+	{
+	case LCK_EXCL:
+		status = _lock_resource(lock1, SA_LCK_EX_LOCK_MODE, flags, lockid);
+		if (status)
+			goto out;
+
+		/* If we can't get this lock too then bail out */
+		status = _lock_resource(lock2, SA_LCK_EX_LOCK_MODE, LCK_NONBLOCK,
+					lockid);
+		if (status == SA_LCK_LOCK_NOT_QUEUED)
+		{
+			_unlock_resource(lock1, *lockid);
+			status = -1;
+			errno = EAGAIN;
+		}
+		break;
+
+	case LCK_PREAD:
+	case LCK_READ:
+		status = _lock_resource(lock1, SA_LCK_PR_LOCK_MODE, flags, lockid);
+		if (status)
+			goto out;
+		_unlock_resource(lock2, *lockid);
+		break;
+
+	case LCK_WRITE:
+		status = _lock_resource(lock2, SA_LCK_EX_LOCK_MODE, flags, lockid);
+		if (status)
+			goto out;
+		_unlock_resource(lock1, *lockid);
+		break;
+
+	default:
+		status = -1;
+		errno = EINVAL;
+		break;
+	}
+out:
+	*lockid = mode;
+	return status;
+}
+
+static int _sync_unlock(const char *resource, int lockid)
+{
+	int status = 0;
+	char lock1[strlen(resource)+3];
+	char lock2[strlen(resource)+3];
+
+	snprintf(lock1, sizeof(lock1), "%s-1", resource);
+	snprintf(lock2, sizeof(lock2), "%s-2", resource);
+
+	_unlock_resource(lock1, lockid);
+	_unlock_resource(lock2, lockid);
+
+	return status;
+}
+
+/* We are always quorate ! */
+static int _is_quorate()
+{
+	return 1;
+}
+
+static int _get_main_cluster_fd(void)
+{
+	int select_fd;
+
+	cpg_fd_get(cpg_handle, &select_fd);
+	return select_fd;
+}
+
+static int _cluster_fd_callback(struct local_client *fd, char *buf, int len,
+				const char *csid,
+				struct local_client **new_client)
+{
+	cluster_client = fd;
+	*new_client = NULL;
+	cpg_dispatch(cpg_handle, SA_DISPATCH_ONE);
+	return 1;
+}
+
+static int _cluster_send_message(const void *buf, int msglen, const char *csid,
+				 const char *errtext)
+{
+	struct iovec iov[2];
+	SaAisErrorT err;
+	int target_node;
+
+	if (csid)
+		memcpy(&target_node, csid, OPENAIS_CSID_LEN);
+	else
+		target_node = 0;
+
+	iov[0].iov_base = &target_node;
+	iov[0].iov_len = sizeof(int);
+	iov[1].iov_base = (char *)buf;
+	iov[1].iov_len = msglen;
+
+	err = cpg_mcast_joined(cpg_handle, CPG_TYPE_AGREED, iov, 2);
+	return ais_to_errno(err);
+}
+
+/* We don't have a cluster name to report here */
+static int _get_cluster_name(char *buf, int buflen)
+{
+	strncpy(buf, "OpenAIS", buflen);
+	return 0;
+}
+
+static struct cluster_ops _cluster_openais_ops = {
+	.cluster_init_completed   = NULL,
+	.cluster_send_message     = _cluster_send_message,
+	.name_from_csid           = _name_from_csid,
+	.csid_from_name           = _csid_from_name,
+	.get_num_nodes            = _get_num_nodes,
+	.cluster_fd_callback      = _cluster_fd_callback,
+	.get_main_cluster_fd      = _get_main_cluster_fd,
+	.cluster_do_node_callback = _cluster_do_node_callback,
+	.is_quorate               = _is_quorate,
+	.get_our_csid             = _get_our_csid,
+	.add_up_node              = _add_up_node,
+	.reread_config            = NULL,
+	.cluster_closedown        = _cluster_closedown,
+	.get_cluster_name         = _get_cluster_name,
+	.sync_lock                = _sync_lock,
+	.sync_unlock              = _sync_unlock,
+};
+
+struct cluster_ops *init_openais_cluster(void)
+{
+	if (!_init_cluster())
+		return &_cluster_openais_ops;
+	else
+		return NULL;
+}
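
OpenAIS identifies members only by a 32-bit node id, so the backend above uses the raw bytes of that id as the csid and its hexadecimal rendering as the node "name". A minimal standalone sketch of the round trip (illustrative only; it skips the node_hash lookups that _csid_from_name() and _name_from_csid() perform):

#include <stdio.h>
#include <string.h>

#define OPENAIS_CSID_LEN (sizeof(int))

int main(void)
{
	int nodeid = 0x2a;	/* example value; the real one comes from the SaClm lookup */
	char csid[OPENAIS_CSID_LEN];
	char name[32];
	int back;

	memcpy(csid, &nodeid, OPENAIS_CSID_LEN);	/* _get_our_csid() direction */
	sprintf(name, "%x", nodeid);			/* _name_from_csid() direction */
	sscanf(name, "%x", &back);			/* _csid_from_name() direction */

	printf("name \"%s\" -> nodeid %d\n", name, back);
	return 0;
}
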
--- LVM2/daemons/clvmd/Makefile.in	2007/01/11 17:12:27	1.17
+++ LVM2/daemons/clvmd/Makefile.in	2007/05/21 10:52:01	1.18
@@ -30,9 +30,16 @@
 	CMAN = yes
 endif
 
+ifeq ("@CLVMD@", "openais")
+	OPENAIS = yes
+	GULM = no
+	CMAN = no
+endif
+
 ifeq ("@CLVMD@", "all")
 	GULM = yes
 	CMAN = yes
+	OPENAIS = no
 endif
 
 ifeq ("@DEBUG@", "yes")
@@ -51,6 +58,12 @@
 	DEFS += -DUSE_CMAN
 endif
 
+ifeq ("$(OPENAIS)", "yes")
+	SOURCES += clvmd-openais.c
+	LMLIBS += -lSaLck -lSaClm -lcpg
+	DEFS += -DUSE_OPENAIS
+endif
+
 TARGETS = \
 	clvmd
 
--- LVM2/daemons/clvmd/clvmd-comms.h	2007/05/02 12:22:40	1.7
+++ LVM2/daemons/clvmd/clvmd-comms.h	2007/05/21 10:52:01	1.8
@@ -75,6 +75,23 @@
 struct cluster_ops *init_cman_cluster(void);
 #endif
 
+#ifdef USE_OPENAIS
+#  include <openais/saAis.h>
+#  include <openais/totem/totem.h>
+#  define OPENAIS_CSID_LEN (sizeof(int))
+#  define OPENAIS_MAX_CLUSTER_MESSAGE         MESSAGE_SIZE_MAX
+#  define OPENAIS_MAX_CLUSTER_MEMBER_NAME_LEN SA_MAX_NAME_LENGTH
+#  ifndef MAX_CLUSTER_MEMBER_NAME_LEN
+#    define MAX_CLUSTER_MEMBER_NAME_LEN       SA_MAX_NAME_LENGTH
+#  endif
+#  ifndef CMAN_MAX_CLUSTER_MESSAGE
+#    define CMAN_MAX_CLUSTER_MESSAGE          MESSAGE_SIZE_MAX
+#  endif
+#  ifndef MAX_CSID_LEN
+#    define MAX_CSID_LEN sizeof(int)
+#  endif
+struct cluster_ops *init_openais_cluster(void);
+#endif
 
 
 #endif
--- LVM2/daemons/clvmd/clvmd.c	2007/05/02 12:22:40	1.36
+++ LVM2/daemons/clvmd/clvmd.c	2007/05/21 10:52:01	1.37
@@ -296,6 +296,15 @@
 			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to GULM");
 		}
 #endif
+#ifdef USE_OPENAIS
+	if (!clops)
+		if ((clops = init_openais_cluster())) {
+			max_csid_len = OPENAIS_CSID_LEN;
+			max_cluster_message = OPENAIS_MAX_CLUSTER_MESSAGE;
+			max_cluster_member_name_len = OPENAIS_MAX_CLUSTER_MEMBER_NAME_LEN;
+			syslog(LOG_NOTICE, "Cluster LVM daemon started - connected to OpenAIS");
+		}
+#endif
 
 	if (!clops) {
 		DEBUGLOG("Can't initialise cluster interface\n");


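One detail of _sync_lock()/_sync_unlock() in the clvmd-openais.c part of this
patch is easy to miss: the SaLck service only offers plain PR/EX modes on a
single resource, so LVM's EXCL/READ/WRITE semantics are emulated with a pair
of resources per lock name. A sketch of which sub-lock each mode ends up
holding (derived from the code above, not from OpenAIS documentation):

/*
 * For lock name "X" two SaLck resources are used, "X-1" and "X-2":
 *
 *   LCK_EXCL  : EX on "X-1" and EX on "X-2" (EAGAIN if "X-2" cannot be
 *               granted immediately)
 *   LCK_READ  : PR on "X-1", "X-2" released
 *   LCK_WRITE : EX on "X-2", "X-1" released
 *
 * Readers share "X-1" and block only EXCL; writers serialise on "X-2";
 * EXCL needs both resources, so it excludes readers, writers and other
 * EXCL holders.
 */
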
^ permalink raw reply	[flat|nested] 9+ messages in thread

* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2006-10-04  8:22 pcaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: pcaulfield @ 2006-10-04  8:22 UTC (permalink / raw)
  To: lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2006-10-04 08:22:16

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvm.h clvmd-command.c clvmd.c 
	                 lvm-functions.c lvm-functions.h 
Added files:
	daemons/clvmd  : refresh_clvmd.c refresh_clvmd.h 

Log message:
	Add -R switch to clvmd.
	This option will instruct all the clvmd daemons in the cluster to reload their device cache.

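The new refresh path reuses the clvmd local command socket; node name "*"
(handled in _build_header() below) addresses every node in the cluster. As a
rough, hypothetical illustration, any program compiled together with
refresh_clvmd.c and linked against libdevmapper could trigger the same
reload that "clvmd -R" does:

#include <stdio.h>
#include "refresh_clvmd.h"

int main(void)
{
	/* As implemented below, refresh_clvmd() returns 1 only when every
	   node answered the CLVMD_CMD_REFRESH request successfully. */
	if (refresh_clvmd() != 1) {
		fprintf(stderr, "device cache refresh failed\n");
		return 1;
	}
	return 0;
}
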
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.448&r2=1.449
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/refresh_clvmd.c.diff?cvsroot=lvm2&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/refresh_clvmd.h.diff?cvsroot=lvm2&r1=NONE&r2=1.1
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.15&r2=1.16
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvm.h.diff?cvsroot=lvm2&r1=1.2&r2=1.3
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-command.c.diff?cvsroot=lvm2&r1=1.8&r2=1.9
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.c.diff?cvsroot=lvm2&r1=1.26&r2=1.27
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/lvm-functions.c.diff?cvsroot=lvm2&r1=1.21&r2=1.22
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/lvm-functions.h.diff?cvsroot=lvm2&r1=1.2&r2=1.3

--- LVM2/WHATS_NEW	2006/10/03 17:55:19	1.448
+++ LVM2/WHATS_NEW	2006/10/04 08:22:15	1.449
@@ -1,5 +1,6 @@
 Version 2.02.11 - 
 =====================================
+  Add -R to clvmd which tells running clvmds to reload their device cache.
   Add LV column to reports listing kernel modules needed for activation.
   Show available fields if report given invalid field. (e.g. lvs -o list)
   Add timestamp functions with --disable-realtime configure option.
/cvs/lvm2/LVM2/daemons/clvmd/refresh_clvmd.c,v  -->  standard output
revision 1.1
--- LVM2/daemons/clvmd/refresh_clvmd.c
+++ -	2006-10-04 08:22:16.589972000 +0000
@@ -0,0 +1,334 @@
+/*
+ * Copyright (C) 2002-2004 Sistina Software, Inc. All rights reserved.
+ * Copyright (C) 2004-2006 Red Hat, Inc. All rights reserved.
+ *
+ * This file is part of LVM2.
+ *
+ * This copyrighted material is made available to anyone wishing to use,
+ * modify, copy, or redistribute it subject to the terms and conditions
+ * of the GNU General Public License v.2.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software Foundation,
+ * Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+/*
+ * Tell all clvmds in a cluster to refresh their toolcontext
+ *
+ */
+
+#include <stddef.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <errno.h>
+#include <unistd.h>
+#include <libdevmapper.h>
+#include <stdint.h>
+#include <stdio.h>
+
+#include "clvm.h"
+#include "refresh_clvmd.h"
+
+typedef struct lvm_response {
+	char node[255];
+	char *response;
+	int status;
+	int len;
+} lvm_response_t;
+
+/*
+ * This gets stuck at the start of memory we allocate so we
+ * can sanity-check it at deallocation time
+ */
+#define LVM_SIGNATURE 0x434C564D
+
+static int _clvmd_sock = -1;
+
+/* Open connection to the Cluster Manager daemon */
+static int _open_local_sock(void)
+{
+	int local_socket;
+	struct sockaddr_un sockaddr;
+
+	/* Open local socket */
+	if ((local_socket = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
+		fprintf(stderr, "Local socket creation failed: %s", strerror(errno));
+		return -1;
+	}
+
+	memset(&sockaddr, 0, sizeof(sockaddr));
+	memcpy(sockaddr.sun_path, CLVMD_SOCKNAME, sizeof(CLVMD_SOCKNAME));
+
+	sockaddr.sun_family = AF_UNIX;
+
+	if (connect(local_socket,(struct sockaddr *) &sockaddr,
+		    sizeof(sockaddr))) {
+		int saved_errno = errno;
+
+		fprintf(stderr, "connect() failed on local socket: %s\n",
+			  strerror(errno));
+		if (close(local_socket))
+			return -1;
+
+		errno = saved_errno;
+		return -1;
+	}
+
+	return local_socket;
+}
+
+/* Send a request and return the status */
+static int _send_request(char *inbuf, int inlen, char **retbuf)
+{
+	char outbuf[PIPE_BUF];
+	struct clvm_header *outheader = (struct clvm_header *) outbuf;
+	int len;
+	int off;
+	int buflen;
+	int err;
+
+	/* Send it to CLVMD */
+ rewrite:
+	if ( (err = write(_clvmd_sock, inbuf, inlen)) != inlen) {
+	        if (err == -1 && errno == EINTR)
+		        goto rewrite;
+		fprintf(stderr, "Error writing data to clvmd: %s", strerror(errno));
+		return 0;
+	}
+
+	/* Get the response */
+ reread:
+	if ((len = read(_clvmd_sock, outbuf, sizeof(struct clvm_header))) < 0) {
+	        if (errno == EINTR)
+		        goto reread;
+		fprintf(stderr, "Error reading data from clvmd: %s", strerror(errno));
+		return 0;
+	}
+
+	if (len == 0) {
+		fprintf(stderr, "EOF reading CLVMD");
+		errno = ENOTCONN;
+		return 0;
+	}
+
+	/* Allocate buffer */
+	buflen = len + outheader->arglen;
+	*retbuf = dm_malloc(buflen);
+	if (!*retbuf) {
+		errno = ENOMEM;
+		return 0;
+	}
+
+	/* Copy the header */
+	memcpy(*retbuf, outbuf, len);
+	outheader = (struct clvm_header *) *retbuf;
+
+	/* Read the returned values */
+	off = 1;		/* we've already read the first byte */
+	while (off <= outheader->arglen && len > 0) {
+		len = read(_clvmd_sock, outheader->args + off,
+			   buflen - off - offsetof(struct clvm_header, args));
+		if (len > 0)
+			off += len;
+	}
+
+	/* Was it an error ? */
+	if (outheader->status != 0) {
+		errno = outheader->status;
+
+		/* Only return an error here if there are no node-specific
+		   errors present in the message that might have more detail */
+		if (!(outheader->flags & CLVMD_FLAG_NODEERRS)) {
+			fprintf(stderr, "cluster request failed: %s\n", strerror(errno));
+			return 0;
+		}
+
+	}
+
+	return 1;
+}
+
+/* Build the structure header and parse-out wildcard node names */
+static void _build_header(struct clvm_header *head, int cmd, const char *node,
+			  int len)
+{
+	head->cmd = cmd;
+	head->status = 0;
+	head->flags = 0;
+	head->clientid = 0;
+	head->arglen = len;
+
+	if (node) {
+		/*
+		 * Allow a couple of special node names:
+		 * "*" for all nodes,
+		 * "." for the local node only
+		 */
+		if (strcmp(node, "*") == 0) {
+			head->node[0] = '\0';
+		} else if (strcmp(node, ".") == 0) {
+			head->node[0] = '\0';
+			head->flags = CLVMD_FLAG_LOCAL;
+		} else
+			strcpy(head->node, node);
+	} else
+		head->node[0] = '\0';
+}
+
+/*
+ * Send a message to one (or all) nodes in the cluster and wait for replies
+ */
+static int _cluster_request(char cmd, const char *node, void *data, int len,
+			   lvm_response_t ** response, int *num)
+{
+	char outbuf[sizeof(struct clvm_header) + len + strlen(node) + 1];
+	int *outptr;
+	char *inptr;
+	char *retbuf = NULL;
+	int status;
+	int i;
+	int num_responses = 0;
+	struct clvm_header *head = (struct clvm_header *) outbuf;
+	lvm_response_t *rarray;
+
+	*num = 0;
+
+	if (_clvmd_sock == -1)
+		_clvmd_sock = _open_local_sock();
+
+	if (_clvmd_sock == -1)
+		return 0;
+
+	_build_header(head, cmd, node, len);
+	memcpy(head->node + strlen(head->node) + 1, data, len);
+
+	status = _send_request(outbuf, sizeof(struct clvm_header) +
+			      strlen(head->node) + len, &retbuf);
+	if (!status)
+		goto out;
+
+	/* Count the number of responses we got */
+	head = (struct clvm_header *) retbuf;
+	inptr = head->args;
+	while (inptr[0]) {
+		num_responses++;
+		inptr += strlen(inptr) + 1;
+		inptr += sizeof(int);
+		inptr += strlen(inptr) + 1;
+	}
+
+	/*
+	 * Allocate response array.
+	 * With an extra pair of INTs on the front to sanity
+	 * check the pointer when we are given it back to free
+	 */
+	outptr = dm_malloc(sizeof(lvm_response_t) * num_responses +
+			    sizeof(int) * 2);
+	if (!outptr) {
+		errno = ENOMEM;
+		status = 0;
+		goto out;
+	}
+
+	*response = (lvm_response_t *) (outptr + 2);
+	outptr[0] = LVM_SIGNATURE;
+	outptr[1] = num_responses;
+	rarray = *response;
+
+	/* Unpack the response into an lvm_response_t array */
+	inptr = head->args;
+	i = 0;
+	while (inptr[0]) {
+		strcpy(rarray[i].node, inptr);
+		inptr += strlen(inptr) + 1;
+
+		memcpy(&rarray[i].status, inptr, sizeof(int));
+		inptr += sizeof(int);
+
+		rarray[i].response = dm_malloc(strlen(inptr) + 1);
+		if (rarray[i].response == NULL) {
+			/* Free up everything else and return error */
+			int j;
+			for (j = 0; j < i; j++)
+				dm_free(rarray[j].response);
+			dm_free(outptr);
+			errno = ENOMEM;
+			status = -1;
+			goto out;
+		}
+
+		strcpy(rarray[i].response, inptr);
+		rarray[i].len = strlen(inptr);
+		inptr += strlen(inptr) + 1;
+		i++;
+	}
+	*num = num_responses;
+	*response = rarray;
+
+      out:
+	if (retbuf)
+		dm_free(retbuf);
+
+	return status;
+}
+
+/* Free reply array */
+static int _cluster_free_request(lvm_response_t * response)
+{
+	int *ptr = (int *) response - 2;
+	int i;
+	int num;
+
+	/* Check it's ours to free */
+	if (response == NULL || *ptr != LVM_SIGNATURE) {
+		errno = EINVAL;
+		return 0;
+	}
+
+	num = ptr[1];
+
+	for (i = 0; i < num; i++) {
+		dm_free(response[i].response);
+	}
+
+	dm_free(ptr);
+
+	return 1;
+}
+
+int refresh_clvmd()
+{
+	int num_responses;
+	char args[1]; // No args really.
+	lvm_response_t *response;
+	int saved_errno;
+	int status;
+	int i;
+
+	status = _cluster_request(CLVMD_CMD_REFRESH, "*", args, 0, &response, &num_responses);
+
+	/* If any nodes were down then display them and return an error */
+	for (i = 0; i < num_responses; i++) {
+		if (response[i].status == EHOSTDOWN) {
+			fprintf(stderr, "clvmd not running on node %s",
+				  response[i].node);
+			status = 0;
+			errno = response[i].status;
+		} else if (response[i].status) {
+			fprintf(stderr, "Error resetting node %s: %s",
+				  response[i].node,
+				  response[i].response[0] ?
+				  	response[i].response :
+				  	strerror(response[i].status));
+			status = 0;
+			errno = response[i].status;
+		}
+	}
+
+	saved_errno = errno;
+	_cluster_free_request(response);
+	errno = saved_errno;
+
+	return status;
+}
/cvs/lvm2/LVM2/daemons/clvmd/refresh_clvmd.h,v  -->  standard output
revision 1.1
--- LVM2/daemons/clvmd/refresh_clvmd.h
+++ -	2006-10-04 08:22:16.674075000 +0000
@@ -0,0 +1,2 @@
+int refresh_clvmd(void);
+
--- LVM2/daemons/clvmd/Makefile.in	2006/05/16 16:48:30	1.15
+++ LVM2/daemons/clvmd/Makefile.in	2006/10/04 08:22:16	1.16
@@ -19,6 +19,7 @@
 	clvmd-command.c  \
 	clvmd.c          \
 	lvm-functions.c  \
+	refresh_clvmd.c \
 	system-lv.c
 
 ifeq ("@CLVMD@", "gulm")
--- LVM2/daemons/clvmd/clvm.h	2005/01/21 11:35:24	1.2
+++ LVM2/daemons/clvmd/clvm.h	2006/10/04 08:22:16	1.3
@@ -63,4 +63,7 @@
 #define CLVMD_CMD_LOCK_LV           50
 #define CLVMD_CMD_LOCK_VG           51
 
+/* Misc functions */
+#define CLVMD_CMD_REFRESH	    40
+
 #endif
--- LVM2/daemons/clvmd/clvmd-command.c	2006/05/12 19:16:48	1.8
+++ LVM2/daemons/clvmd/clvmd-command.c	2006/10/04 08:22:16	1.9
@@ -122,6 +122,10 @@
 		}
 		break;
 
+	case CLVMD_CMD_REFRESH:
+		do_refresh_cache();
+		break;
+
 	default:
 		/* Won't get here because command is validated in pre_command */
 		break;
@@ -222,6 +226,9 @@
 		status = pre_lock_lv(lock_cmd, lock_flags, lockname);
 		break;
 
+	case CLVMD_CMD_REFRESH:
+		break;
+
 	default:
 		log_error("Unknown command %d received\n", header->cmd);
 		status = EINVAL;
--- LVM2/daemons/clvmd/clvmd.c	2006/03/14 14:18:34	1.26
+++ LVM2/daemons/clvmd/clvmd.c	2006/10/04 08:22:16	1.27
@@ -42,6 +42,7 @@
 #include "clvm.h"
 #include "version.h"
 #include "clvmd.h"
+#include "refresh_clvmd.h"
 #include "libdlm.h"
 #include "system-lv.h"
 #include "list.h"
@@ -143,6 +144,7 @@
 	fprintf(file, "   -V       Show version of clvmd\n");
 	fprintf(file, "   -h       Show this help information\n");
 	fprintf(file, "   -d       Don't fork, run in the foreground\n");
+	fprintf(file, "   -R       Tell all running clvmds in the cluster to reload their device cache\n");
 	fprintf(file, "   -t<secs> Command timeout (default 60 seconds)\n");
 	fprintf(file, "\n");
 }
@@ -173,7 +175,7 @@
 	/* Deal with command-line arguments */
 	opterr = 0;
 	optind = 0;
-	while ((opt = getopt(argc, argv, "?vVhdt:")) != EOF) {
+	while ((opt = getopt(argc, argv, "?vVhdt:R")) != EOF) {
 		switch (opt) {
 		case 'h':
 			usage(argv[0], stdout);
@@ -183,6 +185,9 @@
 			usage(argv[0], stderr);
 			exit(0);
 
+		case 'R':
+			return refresh_clvmd();
+
 		case 'd':
 			debug++;
 			break;
--- LVM2/daemons/clvmd/lvm-functions.c	2006/08/22 09:49:20	1.21
+++ LVM2/daemons/clvmd/lvm-functions.c	2006/10/04 08:22:16	1.22
@@ -416,6 +416,13 @@
 	return status == 1 ? 0 : EBUSY;
 }
 
+int do_refresh_cache()
+{
+	DEBUGLOG("Refreshing context\n");
+	log_notice("Refreshing context");
+	return refresh_toolcontext(cmd)==1?0:-1;
+}
+
 
 /* Only called at gulm startup. Drop any leftover VG or P_orphan locks
    that might be hanging around if we died for any reason
--- LVM2/daemons/clvmd/lvm-functions.h	2005/03/07 17:03:44	1.2
+++ LVM2/daemons/clvmd/lvm-functions.h	2006/10/04 08:22:16	1.3
@@ -25,6 +25,7 @@
 extern int post_lock_lv(unsigned char lock_cmd, unsigned char lock_flags,
 			char *resource);
 extern int do_check_lvm1(char *vgname);
+extern int do_refresh_cache(void);
 extern int init_lvm(int using_gulm);
 extern void init_lvhash(void);
 


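The "sanity-check it at deallocation time" trick in refresh_clvmd.c above is
worth spelling out, because the pointer handed back to the caller is not the
pointer that was allocated. As _cluster_request() builds it:

/*
 *   outptr -> [ LVM_SIGNATURE ][ num_responses ][ response[0] ][ response[1] ] ...
 *                  (int)            (int)         ^
 *                                                 |
 *   *response points here ------------------------  (outptr + 2)
 *
 * _cluster_free_request() steps back two ints from the array it is given,
 * verifies LVM_SIGNATURE, reads the count, frees each node's response
 * string and finally the block itself.
 */
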
^ permalink raw reply	[flat|nested] 9+ messages in thread

* LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae ...
@ 2006-03-14 14:18 pcaulfield
  0 siblings, 0 replies; 9+ messages in thread
From: pcaulfield @ 2006-03-14 14:18 UTC (permalink / raw)
  To: lvm2-cvs

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	pcaulfield@sourceware.org	2006-03-14 14:18:34

Modified files:
	.              : WHATS_NEW 
	daemons/clvmd  : Makefile.in clvmd-cman.c clvmd-comms.h clvmd.c 

Log message:
	Get clvmd to use libcman rather than cman ioctl calls. This makes
	it forward-compatible with the new userland CMAN in cluster head.
	
	To build it you will need the libcman header & library installed.

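For readers unfamiliar with the userland interface this moves to, here is a
minimal, self-contained sketch of the libcman calls involved (based on the
libcman.h API of that era - cman_init(), cman_get_node() with CMAN_NODEID_US,
cman_finish(); the exact usage in clvmd-cman.c may differ, and the program
is built with -lcman):

#include <stdio.h>
#include <string.h>
#include <libcman.h>

int main(void)
{
	cman_handle_t h;
	cman_node_t node;

	/* Talk to the cman daemon through the library instead of device ioctls */
	h = cman_init(NULL);
	if (!h) {
		perror("cman_init");
		return 1;
	}

	memset(&node, 0, sizeof(node));
	if (cman_get_node(h, CMAN_NODEID_US, &node) == 0)
		printf("we are node %d (%s)\n", node.cn_nodeid, node.cn_name);

	cman_finish(h);
	return 0;
}
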
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.347&r2=1.348
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/Makefile.in.diff?cvsroot=lvm2&r1=1.12&r2=1.13
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-cman.c.diff?cvsroot=lvm2&r1=1.11&r2=1.12
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd-comms.h.diff?cvsroot=lvm2&r1=1.3&r2=1.4
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/daemons/clvmd/clvmd.c.diff?cvsroot=lvm2&r1=1.25&r2=1.26


^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2009-02-11 10:13 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-02-21 15:58 LVM2 ./WHATS_NEW daemons/clvmd/Makefile.in dae pcaulfield
2006-03-14 14:18 pcaulfield
2006-10-04  8:22 pcaulfield
2007-05-21 10:52 pcaulfield
2007-06-14 10:16 pcaulfield
2007-06-25  9:02 pcaulfield
2009-01-22 10:21 ccaulfield
2009-02-02 14:34 ccaulfield
2009-02-11 10:13 ccaulfield

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).