public inbox for glibc-bugs@sourceware.org
From: "rich at testardi dot com" <sourceware-bugzilla@sourceware.org>
To: glibc-bugs@sources.redhat.com
Subject: [Bug libc/11261] malloc uses excessive memory for multi-threaded applications
Date: Wed, 10 Feb 2010 13:42:00 -0000
Message-ID: <20100210134158.31120.qmail@sourceware.org> (raw)
In-Reply-To: <20100208202339.11261.rich@testardi.com>

------- Additional Comments From rich at testardi dot com  2010-02-10 13:41 -------

Hi Ulrich,

Agreed 100% -- no one size fits all...

Unfortunately, neither of the "tuning" settings, MALLOC_ARENA_MAX nor
MALLOC_ARENA_TEST, seems to work. Neither do the mallopt() options
M_ARENA_MAX nor M_ARENA_TEST. :-(

Part of the problem seems to stem from the fact that the global "narenas"
is only incremented if MALLOC_PER_THREAD/use_per_thread is true:

    #ifdef PER_THREAD
      if (__builtin_expect (use_per_thread, 0))
        {
          ++narenas;
          (void)mutex_unlock(&list_lock);
        }
    #endif

So the tests of those variables in reused_arena() never limit anything.
And setting MALLOC_PER_THREAD makes our problem much worse.

    static mstate
    reused_arena (void)
    {
      if (narenas <= mp_.arena_test)
        return NULL;
      ...
      if (narenas < narenas_limit)
        return NULL;

I also tried every combination I could imagine of MALLOC_PER_THREAD and
the other variables, to no avail. I did the same with mallopt(), verifying
at the assembly level that all the right values got into mp_. :-(

Specifically, I tried things like:

    export MALLOC_PER_THREAD=1
    export MALLOC_ARENA_MAX=1
    export MALLOC_ARENA_TEST=1

and:

    rv = mallopt(-7, 1);
    printf("%d\n", rv);
    rv = mallopt(-8, 1);
    printf("%d\n", rv);

Anyway, thank you. You've already pointed me in all of the right
directions. If I did something completely brain-dead above, feel free to
tell me and save me another few days of work! :-)

-- Rich

--
http://sourceware.org/bugzilla/show_bug.cgi?id=11261

------- You are receiving this mail because: -------
You are on the CC list for the bug, or are watching someone who is.
Thread overview: 21+ messages

2010-02-08 20:23  [Bug libc/11261] New: malloc uses excessive memory for multi-threaded applications -- rich at testardi dot com
2010-02-09 15:28  [Bug libc/11261] malloc uses excessive memory for multi-threaded applications -- drepper at redhat dot com
2010-02-09 16:02  rich at testardi dot com
2010-02-10 13:10  rich at testardi dot com
2010-02-10 13:21  drepper at redhat dot com
2010-02-10 13:42  rich at testardi dot com  [this message]
2010-02-10 14:29  rich at testardi dot com
2010-02-10 15:52  rich at testardi dot com
[not found]        <bug-11261-131@http.sourceware.org/bugzilla/>
2011-08-27 21:45  heuler at infosim dot net
2011-08-27 22:02  rich at testardi dot com
2011-09-02  7:39  heuler at infosim dot net
2011-09-02  7:45  heuler at infosim dot net
2011-09-11 15:46  drepper.fsp at gmail dot com
2011-09-11 21:32  rich at testardi dot com
2012-07-29 10:10  zhannk at gmail dot com
2012-12-19 10:47  schwab@linux-m68k.org
2013-03-14 19:03  carlos at redhat dot com
2013-12-12  0:22  neleai at seznam dot cz
2013-12-12  3:32  siddhesh at redhat dot com
2013-12-12  8:41  neleai at seznam dot cz
2013-12-12 10:48  siddhesh at redhat dot com