public inbox for lvm2-cvs@sourceware.org
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2012-04-11 1:23 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2012-04-11 1:23 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2012-04-11 01:23:29
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
RAID LVs could not handle a down-convert if a device other than the last one
in the array was specified for removal. This change addresses that (bz806111).
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2376&r2=1.2377
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.25&r2=1.26
--- LVM2/WHATS_NEW 2012/04/10 23:34:41 1.2376
+++ LVM2/WHATS_NEW 2012/04/11 01:23:29 1.2377
@@ -1,5 +1,6 @@
Version 2.02.96 -
================================
+ Fix problems when specifying PVs during RAID down-converts.
Fix ability to handle failures in mirrored log (regression intro 2.02.89).
Fix unlocking volume group in vgreduce in error path.
Exit immediately if LISTEN_PID env var incorrect during systemd handover.
--- LVM2/lib/metadata/raid_manip.c 2012/03/15 20:00:54 1.25
+++ LVM2/lib/metadata/raid_manip.c 2012/04/11 01:23:29 1.26
@@ -975,6 +975,8 @@
static int _raid_remove_images(struct logical_volume *lv,
uint32_t new_count, struct dm_list *pvs)
{
+ uint32_t s;
+ struct lv_segment *seg;
struct dm_list removal_list;
struct lv_list *lvl;
@@ -1024,9 +1026,21 @@
}
/*
- * Resume original LV
- * This also resumes all other sub-LVs
+ * Resume the remaining LVs
+ * We must start by resuming the sub-LVs first (which would
+ * otherwise be handled automatically) because the shifting
+ * of positions could otherwise cause name collisions. For
+ * example, if position 0 of a 3-way array is removed, position
+ * 1 and 2 must be shifted and renamed 0 and 1. If position 2
+ * tries to rename first, it will collide with the existing
+ * position 1.
*/
+ seg = first_seg(lv);
+ for (s = 0; (new_count > 1) && (s < seg->area_count); s++) {
+ if (!resume_lv(lv->vg->cmd, seg_lv(seg, s)) ||
+ !resume_lv(lv->vg->cmd, seg_metalv(seg, s)))
+ return_0;
+ }
if (!resume_lv(lv->vg->cmd, lv)) {
log_error("Failed to resume %s/%s after committing changes",
lv->vg->name, lv->name);
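For illustration, here is a stand-alone C sketch (not LVM code; the image
names and the rename helper are made up) of the collision the new comment
describes when image 0 of a 3-way array is removed:

#include <stdio.h>
#include <string.h>

static char names[2][16] = { "lv_rimage_1", "lv_rimage_2" };

/* Rename names[idx] to new_name unless another image already owns it. */
static int rename_image(int idx, const char *new_name)
{
	int i;

	for (i = 0; i < 2; i++)
		if (i != idx && !strcmp(names[i], new_name)) {
			printf("%s -> %s: collision\n", names[idx], new_name);
			return 0;
		}
	printf("%s -> %s: ok\n", names[idx], new_name);
	strcpy(names[idx], new_name);
	return 1;
}

int main(void)
{
	/* Wrong order: the old image 2 tries to take "lv_rimage_1" while
	 * the old image 1 still holds that name. */
	rename_image(1, "lv_rimage_1");

	/* Right order (bottom-up): shift the lowest surviving image first,
	 * which frees its old name for the next one. */
	rename_image(0, "lv_rimage_0");
	rename_image(1, "lv_rimage_1");

	return 0;
}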
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2012-04-24 20:05 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2012-04-24 20:05 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2012-04-24 20:05:31
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
Allow a subset of failed devices to be replaced in RAID LVs.
If two devices in an array failed, it was previously impossible to replace
just one of them. This patch allows for the replacement of some, but perhaps
not all, failed devices.
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2390&r2=1.2391
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.28&r2=1.29
--- LVM2/WHATS_NEW 2012/04/24 20:00:03 1.2390
+++ LVM2/WHATS_NEW 2012/04/24 20:05:31 1.2391
@@ -1,5 +1,6 @@
Version 2.02.96 -
================================
+ Allow subset of failed devices to be replaced in RAID LVs.
Prevent resume from creating error devices that already exist from suspend.
Improve clvmd singlenode locking for better testing.
Update and correct lvs man page with supported column names.
--- LVM2/lib/metadata/raid_manip.c 2012/04/12 03:16:37 1.28
+++ LVM2/lib/metadata/raid_manip.c 2012/04/24 20:05:31 1.29
@@ -1632,10 +1632,28 @@
*
* - We need to change the LV names when we insert them.
*/
+try_again:
if (!_alloc_image_components(lv, allocate_pvs, match_count,
&new_meta_lvs, &new_data_lvs)) {
log_error("Failed to allocate replacement images for %s/%s",
lv->vg->name, lv->name);
+
+ /*
+ * If this is a repair, then try to
+ * do better than all-or-nothing
+ */
+ if (match_count > 1) {
+ log_error("Attempting replacement of %u devices"
+ " instead of %u", match_count - 1, match_count);
+ match_count--;
+
+ /*
+ * Since we are replacing some but not all of the bad
+ * devices, we must set partial_activation
+ */
+ lv->vg->cmd->partial_activation = 1;
+ goto try_again;
+ }
return 0;
}
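In isolation, the try_again logic above amounts to a "lower the request and
retry" loop. A minimal sketch follows (the allocator is a hypothetical
stand-in, not the real _alloc_image_components() signature):

#include <stdio.h>

/* Stand-in allocator: pretend only one replacement image can be built. */
static int alloc_images(unsigned count)
{
	return count <= 1;
}

int main(void)
{
	unsigned match_count = 2;	/* two failed devices to replace */

	while (!alloc_images(match_count)) {
		if (match_count <= 1)
			return 1;	/* nothing smaller left to try */
		printf("Attempting replacement of %u devices instead of %u\n",
		       match_count - 1, match_count);
		match_count--;
	}
	printf("Replacing %u device(s)\n", match_count);

	return 0;
}

In the real patch, each retry also sets partial_activation, since the LV will
still be missing the devices that could not be replaced.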
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2012-04-12 3:16 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2012-04-12 3:16 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2012-04-12 03:16:37
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
Fix code that performs RAID device replacement while under snapshot.
The code should have been calling [suspend|resume]_lv_origin() rather than
[suspend|resume]_lv.
This addresses bug 807069.
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2383&r2=1.2384
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.27&r2=1.28
--- LVM2/WHATS_NEW 2012/04/11 14:20:19 1.2383
+++ LVM2/WHATS_NEW 2012/04/12 03:16:37 1.2384
@@ -1,5 +1,6 @@
Version 2.02.96 -
================================
+ Fix RAID device replacement code so that it works under snapshot.
Fix inability to split RAID1 image while specifying a particular PV.
Update man pages to give them same look&feel.
Fix lvresize of thin pool for striped devices.
--- LVM2/lib/metadata/raid_manip.c 2012/04/11 14:20:20 1.27
+++ LVM2/lib/metadata/raid_manip.c 2012/04/12 03:16:37 1.28
@@ -1713,7 +1713,7 @@
return 0;
}
- if (!suspend_lv(lv->vg->cmd, lv)) {
+ if (!suspend_lv_origin(lv->vg->cmd, lv)) {
log_error("Failed to suspend %s/%s before committing changes",
lv->vg->name, lv->name);
return 0;
@@ -1725,7 +1725,7 @@
return 0;
}
- if (!resume_lv(lv->vg->cmd, lv)) {
+ if (!resume_lv_origin(lv->vg->cmd, lv)) {
log_error("Failed to resume %s/%s after committing changes",
lv->vg->name, lv->name);
return 0;
@@ -1761,7 +1761,7 @@
return 0;
}
- if (!suspend_lv(lv->vg->cmd, lv)) {
+ if (!suspend_lv_origin(lv->vg->cmd, lv)) {
log_error("Failed to suspend %s/%s before committing changes",
lv->vg->name, lv->name);
return 0;
@@ -1773,7 +1773,7 @@
return 0;
}
- if (!resume_lv(lv->vg->cmd, lv)) {
+ if (!resume_lv_origin(lv->vg->cmd, lv)) {
log_error("Failed to resume %s/%s after committing changes",
lv->vg->name, lv->name);
return 0;
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2012-04-11 14:20 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2012-04-11 14:20 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2012-04-11 14:20:20
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
Fix inability to split RAID1 image while specifying a particular PV.
The logic for resuming the original and newly split LVs did not properly
handle situations where anything but the last device in the array
was split. It did not take into account the possible name collisions that
might occur when the original LV undergoes the shifting and renaming of its
sub-LVs.
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2382&r2=1.2383
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.26&r2=1.27
--- LVM2/WHATS_NEW 2012/04/11 12:42:10 1.2382
+++ LVM2/WHATS_NEW 2012/04/11 14:20:19 1.2383
@@ -1,5 +1,6 @@
Version 2.02.96 -
================================
+ Fix inability to split RAID1 image while specifying a particular PV.
Update man pages to give them same look&feel.
Fix lvresize of thin pool for striped devices.
For lvresize round upward when specifying number of extents.
--- LVM2/lib/metadata/raid_manip.c 2012/04/11 01:23:29 1.26
+++ LVM2/lib/metadata/raid_manip.c 2012/04/11 14:20:20 1.27
@@ -64,6 +64,24 @@
return seg->area_count;
}
+/*
+ * Resume sub-LVs first, then top-level LV
+ */
+static int _bottom_up_resume(struct logical_volume *lv)
+{
+ uint32_t s;
+ struct lv_segment *seg = first_seg(lv);
+
+ if (seg_is_raid(seg) && (seg->area_count > 1)) {
+ for (s = 0; s < seg->area_count; s++)
+ if (!resume_lv(lv->vg->cmd, seg_lv(seg, s)) ||
+ !resume_lv(lv->vg->cmd, seg_metalv(seg, s)))
+ return_0;
+ }
+
+ return resume_lv(lv->vg->cmd, lv);
+}
+
static int _activate_sublv_preserving_excl(struct logical_volume *top_lv,
struct logical_volume *sub_lv)
{
@@ -975,8 +993,6 @@
static int _raid_remove_images(struct logical_volume *lv,
uint32_t new_count, struct dm_list *pvs)
{
- uint32_t s;
- struct lv_segment *seg;
struct dm_list removal_list;
struct lv_list *lvl;
@@ -1035,13 +1051,7 @@
* tries to rename first, it will collide with the existing
* position 1.
*/
- seg = first_seg(lv);
- for (s = 0; (new_count > 1) && (s < seg->area_count); s++) {
- if (!resume_lv(lv->vg->cmd, seg_lv(seg, s)) ||
- !resume_lv(lv->vg->cmd, seg_metalv(seg, s)))
- return_0;
- }
- if (!resume_lv(lv->vg->cmd, lv)) {
+ if (!_bottom_up_resume(lv)) {
log_error("Failed to resume %s/%s after committing changes",
lv->vg->name, lv->name);
return 0;
@@ -1193,22 +1203,33 @@
}
/*
- * Resume original LV
- * This also resumes all other sub-lvs (including the extracted)
+ * First resume the newly split LV and LVs on the removal list.
+ * This is necessary so that there are no name collisions due to
+ * the original RAID LV having possibly had sub-LVs that have been
+ * shifted and renamed.
+ */
+ if (!resume_lv(cmd, lvl->lv))
+ return_0;
+ dm_list_iterate_items(lvl, &removal_list)
+ if (!resume_lv(cmd, lvl->lv))
+ return_0;
+
+ /*
+ * Resume the remaining LVs
+ * We must start by resuming the sub-LVs first (which would
+ * otherwise be handled automatically) because the shifting
+ * of positions could otherwise cause name collisions. For
+ * example, if position 0 of a 3-way array is split, position
+ * 1 and 2 must be shifted and renamed 0 and 1. If position 2
+ * tries to rename first, it will collide with the existing
+ * position 1.
*/
- if (!resume_lv(cmd, lv)) {
+ if (!_bottom_up_resume(lv)) {
log_error("Failed to resume %s/%s after committing changes",
lv->vg->name, lv->name);
return 0;
}
- /* Recycle newly split LV so it is properly renamed */
- if (!suspend_lv(cmd, lvl->lv) || !resume_lv(cmd, lvl->lv)) {
- log_error("Failed to rename %s to %s after committing changes",
- old_name, split_name);
- return 0;
- }
-
/*
* Eliminate the residual LVs
*/
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2012-02-13 11:10 zkabelac
0 siblings, 0 replies; 8+ messages in thread
From: zkabelac @ 2012-02-13 11:10 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: zkabelac@sourceware.org 2012-02-13 11:10:37
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
Add check for rimage name allocation failure
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2288&r2=1.2289
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.21&r2=1.22
--- LVM2/WHATS_NEW 2012/02/13 11:09:25 1.2288
+++ LVM2/WHATS_NEW 2012/02/13 11:10:37 1.2289
@@ -1,5 +1,6 @@
Version 2.02.92 -
====================================
+ Add check for rimage name allocation failure in _raid_add_images().
Add check for mda_copy failure in _text_pv_setup().
Add check for _mirrored_init_target failure.
Add free_orphan_vg.
--- LVM2/lib/metadata/raid_manip.c 2012/01/24 14:33:38 1.21
+++ LVM2/lib/metadata/raid_manip.c 2012/02/13 11:10:37 1.22
@@ -655,7 +655,10 @@
if (l == dm_list_last(&data_lvs)) {
lvl = dm_list_item(l, struct lv_list);
len = strlen(lv->name) + strlen("_rimage_XXX");
- name = dm_pool_alloc(lv->vg->vgmem, len);
+ if (!(name = dm_pool_alloc(lv->vg->vgmem, len))) {
+ log_error("Failed to allocate rimage name.");
+ return 0;
+ }
sprintf(name, "%s_rimage_%u", lv->name, count);
lvl->lv->name = name;
continue;
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2011-12-01 0:21 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2011-12-01 0:21 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2011-12-01 00:21:04
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
Don't allow two images to be split and tracked from a RAID LV at one time
Also, don't allow a splitmirror operation on a RAID LV that is already tracking
a split, unless the operation is to stop the tracking and complete the split.
Example:
~> lvconvert --splitmirrors 1 --trackchanges vg/lv /dev/sdc1
# Now tracking changes - image can be merged back or split-off for good
~> lvconvert --splitmirrors 1 -n new_name vg/lv /dev/sdc1
# ^ Completes split ^
If a split is performed on a RAID that is tracking an already split image and
PVs are provided, we must ensure that
1) the already split LV is represented in the PVs
2) we are careful to split only the tracked image
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2202&r2=1.2203
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.19&r2=1.20
--- LVM2/WHATS_NEW 2011/12/01 00:13:16 1.2202
+++ LVM2/WHATS_NEW 2011/12/01 00:21:04 1.2203
@@ -1,5 +1,6 @@
Version 2.02.89 -
==================================
+ Don't allow two images to be split and tracked from a RAID LV at one time
Don't allow size change of RAID LV that is tracking changes for a split image
Don't allow size change of RAID sub-LVs independently
Don't allow name change of RAID LV that is tracking changes for a split image
--- LVM2/lib/metadata/raid_manip.c 2011/12/01 00:09:35 1.19
+++ LVM2/lib/metadata/raid_manip.c 2011/12/01 00:21:04 1.20
@@ -26,20 +26,32 @@
#define RAID_REGION_SIZE 1024
-int lv_is_raid_with_tracking(const struct logical_volume *lv)
+static int _lv_is_raid_with_tracking(const struct logical_volume *lv,
+ struct logical_volume **tracking)
{
uint32_t s;
struct lv_segment *seg;
- if (lv->status & RAID) {
- seg = first_seg(lv);
+ *tracking = NULL;
+ seg = first_seg(lv);
- for (s = 0; s < seg->area_count; s++)
- if (lv_is_visible(seg_lv(seg, s)) &&
- !(seg_lv(seg, s)->status & LVM_WRITE))
- return 1;
- }
- return 0;
+ if (!(lv->status & RAID))
+ return 0;
+
+ for (s = 0; s < seg->area_count; s++)
+ if (lv_is_visible(seg_lv(seg, s)) &&
+ !(seg_lv(seg, s)->status & LVM_WRITE))
+ *tracking = seg_lv(seg, s);
+
+
+ return *tracking ? 1 : 0;
+}
+
+int lv_is_raid_with_tracking(const struct logical_volume *lv)
+{
+ struct logical_volume *tracking;
+
+ return _lv_is_raid_with_tracking(lv, &tracking);
}
uint32_t lv_raid_image_count(const struct logical_volume *lv)
@@ -1051,6 +1063,8 @@
struct dm_list removal_list, data_list;
struct cmd_context *cmd = lv->vg->cmd;
uint32_t old_count = lv_raid_image_count(lv);
+ struct logical_volume *tracking;
+ struct dm_list tracking_pvs;
dm_list_init(&removal_list);
dm_list_init(&data_list);
@@ -1079,6 +1093,25 @@
return 0;
}
+ /*
+ * We only allow a split while there is tracking if it is to
+ * complete the split of the tracking sub-LV
+ */
+ if (_lv_is_raid_with_tracking(lv, &tracking)) {
+ if (!_lv_is_on_pvs(tracking, splittable_pvs)) {
+ log_error("Unable to split additional image from %s "
+ "while tracking changes for %s",
+ lv->name, tracking->name);
+ return 0;
+ } else {
+ /* Ensure we only split the tracking image */
+ dm_list_init(&tracking_pvs);
+ splittable_pvs = &tracking_pvs;
+ if (!_get_pv_list_for_lv(tracking, splittable_pvs))
+ return_0;
+ }
+ }
+
if (!_raid_extract_images(lv, new_count, splittable_pvs, 1,
&removal_list, &data_list)) {
log_error("Failed to extract images from %s/%s",
@@ -1181,6 +1214,12 @@
return 0;
}
+ /* Cannot track two split images at once */
+ if (lv_is_raid_with_tracking(lv)) {
+ log_error("Cannot track more than one split image at a time");
+ return 0;
+ }
+
for (s = seg->area_count - 1; s >= 0; s--) {
if (!_lv_is_on_pvs(seg_lv(seg, s), splittable_pvs))
continue;
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2011-09-13 16:33 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2011-09-13 16:33 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2011-09-13 16:33:22
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
Changing RAID status flags to 64-bit broke some binary flag operations.
LVM_WRITE is a 32-bit flag. Now that RAID[_IMAGE|_META] are 64-bit,
and'ing a RAID LV's status against LVM_WRITE can reset the higher order
flags.
A similar problem will affect the thinp flags if care is not taken.
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2103&r2=1.2104
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.12&r2=1.13
--- LVM2/WHATS_NEW 2011/09/13 14:37:48 1.2103
+++ LVM2/WHATS_NEW 2011/09/13 16:33:21 1.2104
@@ -1,5 +1,6 @@
Version 2.02.89 -
==================================
+ Fix improper RAID 64-bit status flag reset when and'ing against 32-bit flag.
Fix log size calculation when only a log is being added to a mirror.
Work around resume_lv causing error LV scanning during splitmirror operation.
Add 7th lv_attr char to show the related kernel target.
--- LVM2/lib/metadata/raid_manip.c 2011/09/06 18:49:32 1.12
+++ LVM2/lib/metadata/raid_manip.c 2011/09/13 16:33:21 1.13
@@ -971,7 +971,11 @@
if (!_lv_is_on_pvs(seg_lv(seg, s), splittable_pvs))
continue;
lv_set_visible(seg_lv(seg, s));
- seg_lv(seg, s)->status &= ~LVM_WRITE;
+ /*
+ * LVM_WRITE is 32-bit, if we don't '|' it with
+ * UINT64_C(0) it will remove all higher order flags
+ */
+ seg_lv(seg, s)->status &= ~(UINT64_C(0) | LVM_WRITE);
break;
}
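The promotion problem is easy to reproduce outside LVM; a minimal sketch
(the flag values below are made up, not the real LVM_WRITE/RAID_IMAGE
definitions):

#include <stdint.h>
#include <stdio.h>

#define LVM_WRITE	0x00000100U			/* 32-bit flag, example value */
#define RAID_IMAGE	UINT64_C(0x0000010000000000)	/* 64-bit flag, example value */

int main(void)
{
	uint64_t status = RAID_IMAGE | LVM_WRITE;

	/* ~LVM_WRITE is evaluated in 32 bits, then zero-extended, so the
	 * 'and' clears every bit above bit 31 -- RAID_IMAGE is lost. */
	uint64_t broken = status & ~LVM_WRITE;

	/* Or'ing with UINT64_C(0) promotes the flag to 64 bits before the
	 * complement, so only LVM_WRITE is cleared. */
	uint64_t fixed = status & ~(UINT64_C(0) | LVM_WRITE);

	printf("broken: 0x%016llx\nfixed:  0x%016llx\n",
	       (unsigned long long)broken, (unsigned long long)fixed);

	return 0;
}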
* LVM2 ./WHATS_NEW lib/metadata/raid_manip.c
@ 2011-08-18 19:31 jbrassow
0 siblings, 0 replies; 8+ messages in thread
From: jbrassow @ 2011-08-18 19:31 UTC (permalink / raw)
To: lvm-devel, lvm2-cvs
CVSROOT: /cvs/lvm2
Module name: LVM2
Changes by: jbrassow@sourceware.org 2011-08-18 19:31:33
Modified files:
. : WHATS_NEW
lib/metadata : raid_manip.c
Log message:
When down-converting RAID1, don't activate sub-lvs between suspend/resume
of top-level LV.
We can't activate sub-LVs that are being removed from a RAID1 LV while it
is suspended, yet that is exactly what was being done to make them show up
so they could be removed. 'sync_local_dev_names' is a sufficient and proper
replacement and can be called after the top-level LV is resumed.
Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2074&r2=1.2075
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/metadata/raid_manip.c.diff?cvsroot=lvm2&r1=1.4&r2=1.5
--- LVM2/WHATS_NEW 2011/08/17 15:15:36 1.2074
+++ LVM2/WHATS_NEW 2011/08/18 19:31:33 1.2075
@@ -1,5 +1,6 @@
Version 2.02.88 -
==================================
+ When down-converting RAID1, don't activate sub-lvs between suspend/resume
Add -V as short form of --virtualsize in lvcreate.
Fix make clean not to remove Makefile. (2.02.87)
--- LVM2/lib/metadata/raid_manip.c 2011/08/13 04:28:34 1.4
+++ LVM2/lib/metadata/raid_manip.c 2011/08/18 19:31:33 1.5
@@ -488,22 +488,9 @@
}
/*
- * Bring extracted LVs into existance, so there are no
- * conflicts for the main RAID device's resume
+ * Resume original LV
+ * This also resumes all other sub-lvs (including the extracted)
*/
- if (!dm_list_empty(&removal_list)) {
- dm_list_iterate_items(lvl, &removal_list) {
- /* If top RAID was EX, use EX */
- if (lv_is_active_exclusive_locally(lv)) {
- if (!activate_lv_excl(lv->vg->cmd, lvl->lv))
- return_0;
- } else {
- if (!activate_lv(lv->vg->cmd, lvl->lv))
- return_0;
- }
- }
- }
-
if (!resume_lv(lv->vg->cmd, lv)) {
log_error("Failed to resume %s/%s after committing changes",
lv->vg->name, lv->name);
@@ -513,6 +500,7 @@
/*
* Eliminate the extracted LVs
*/
+ sync_local_dev_names(lv->vg->cmd);
if (!dm_list_empty(&removal_list)) {
dm_list_iterate_items(lvl, &removal_list) {
if (!deactivate_lv(lv->vg->cmd, lvl->lv))