public inbox for lvm2-cvs@sourceware.org
From: prajnoha@sourceware.org
To: lvm-devel@redhat.com, lvm2-cvs@sourceware.org
Subject: LVM2 ./WHATS_NEW lib/format_text/format-text.c
Date: Mon, 29 Aug 2011 13:37:00 -0000
Message-ID: <20110829133737.30499.qmail@sourceware.org>

CVSROOT:	/cvs/lvm2
Module name:	LVM2
Changes by:	prajnoha@sourceware.org	2011-08-29 13:37:37

Modified files:
	.              : WHATS_NEW 
	lib/format_text: format-text.c 

Log message:
	Directly allocate buffer memory in a pvck scan instead of using a mempool.
	
	Memory usage grows very high when calling _pv_analyse_mda_raw (e.g. while
	executing pvck) and the command can end up failing with "out of memory".
	
	_pv_analyse_mda_raw scans for metadata in the MDA, iteratively increasing the
	size to scan by SECTOR_SIZE until we find a probable config section or we reach
	the edge of the metadata area. However, when using a memory pool, each iteration
	also requests a bigger and bigger chunk that the pool cannot hand out from the
	chunks it already holds, so a new chunk is allocated every time and memory
	consumption keeps growing...
	
	This patch just replaces the mempool with direct memory allocation (dm_malloc
	and dm_free) in this problematic part of the code.
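
To make the failure mode concrete, here is a minimal, self-contained simulation
in plain C. It is not the real dm_pool implementation; it only models the
behaviour the log message describes: every request gets a brand-new chunk, and
old chunks stay held until the pool is destroyed, so total consumption
approaches the sum of all request sizes rather than the largest one. SECTOR_SIZE
and the 64-sector loop bound are illustrative values, not taken from the code.

#include <stdio.h>
#include <stdlib.h>

#define SECTOR_SIZE 512

/* One allocation held by the simulated pool. */
struct chunk {
	struct chunk *prev;
	size_t size;
	/* payload follows the header in the same malloc'd block */
};

struct pool {
	struct chunk *tail;	/* most recently allocated chunk */
	size_t total;		/* bytes currently held by the pool */
};

/* Simplified: every request gets its own chunk, and no chunk is
 * released or reused until the whole pool is destroyed. */
static void *pool_alloc(struct pool *p, size_t len)
{
	struct chunk *c = malloc(sizeof(*c) + len);
	if (!c)
		return NULL;
	c->prev = p->tail;
	c->size = len;
	p->tail = c;
	p->total += len;
	return c + 1;	/* pointer to the payload area */
}

static void pool_destroy(struct pool *p)
{
	while (p->tail) {
		struct chunk *prev = p->tail->prev;
		free(p->tail);
		p->tail = prev;
	}
}

int main(void)
{
	struct pool p = { NULL, 0 };
	size_t size;

	/* Mimic the scan loop growing its buffer one sector at a time. */
	for (size = SECTOR_SIZE; size <= 64 * SECTOR_SIZE; size += SECTOR_SIZE)
		if (!pool_alloc(&p, size))
			return 1;

	/* The peak need was 64 sectors; the pool holds the sum of all
	 * requests, roughly 32x more. */
	printf("largest request: %zu bytes, pool holds: %zu bytes\n",
	       (size_t)(64 * SECTOR_SIZE), p.total);
	pool_destroy(&p);
	return 0;
}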

Patches:
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/WHATS_NEW.diff?cvsroot=lvm2&r1=1.2084&r2=1.2085
http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/lib/format_text/format-text.c.diff?cvsroot=lvm2&r1=1.183&r2=1.184

--- LVM2/WHATS_NEW	2011/08/24 13:41:46	1.2084
+++ LVM2/WHATS_NEW	2011/08/29 13:37:36	1.2085
@@ -1,5 +1,6 @@
 Version 2.02.89 - 
 ==================================
+  Directly allocate buffer memory in a pvck scan instead of using a mempool.
   Add configure --with-thin for (unimplemented) segtypes "thin" and "thin_pool".
   Fix raid shared lib segtype registration (2.02.87).
 
--- LVM2/lib/format_text/format-text.c	2011/08/10 20:25:30	1.183
+++ LVM2/lib/format_text/format-text.c	2011/08/29 13:37:37	1.184
@@ -226,7 +226,7 @@
 		 * "maybe_config_section" returning true when there's no valid
 		 * metadata in a sector (sectors with all nulls).
 		 */
-		if (!(buf = dm_pool_alloc(fmt->cmd->mem, size + size2)))
+		if (!(buf = dm_malloc(size + size2)))
 			goto_out;
 
 		if (!dev_read_circular(area->dev, offset, size,
@@ -261,14 +261,14 @@
 				size += SECTOR_SIZE;
 			}
 		}
-		dm_pool_free(fmt->cmd->mem, buf);
+		dm_free(buf);
 		buf = NULL;
 	}
 
 	r = 1;
  out:
 	if (buf)
-		dm_pool_free(fmt->cmd->mem, buf);
+		dm_free(buf);
 	if (!dev_close(area->dev))
 		stack;
 	return r;
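
For comparison, a minimal sketch of the post-patch allocation pattern, with
plain malloc/free standing in for dm_malloc/dm_free. The maybe_config_section
body and the mda_size bound here are dummies, and the memset merely stands in
for dev_read_circular; none of this is the real _pv_analyse_mda_raw logic.
Because the buffer is released on every pass, peak memory stays bounded by the
largest single buffer instead of accumulating across iterations.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SECTOR_SIZE 512

/* Hypothetical stand-in for the real "probable config section" check. */
static int maybe_config_section(const char *buf, size_t len)
{
	return buf && len >= 4 * SECTOR_SIZE;	/* pretend we find one here */
}

int main(void)
{
	size_t mda_size = 16 * SECTOR_SIZE;	/* "edge" of the metadata area */
	size_t size = SECTOR_SIZE;
	char *buf;

	while (size <= mda_size) {
		if (!(buf = malloc(size)))	/* was dm_pool_alloc() */
			return 1;

		memset(buf, 0, size);		/* stands in for dev_read_circular() */

		if (maybe_config_section(buf, size)) {
			printf("candidate found at %zu bytes\n", size);
			free(buf);		/* was dm_pool_free() */
			return 0;
		}

		free(buf);	/* released every pass; peak stays at one buffer */
		size += SECTOR_SIZE;
	}
	return 0;
}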

