public inbox for glibc-bugs@sourceware.org
From: "law at redhat dot com" <sourceware-bugzilla@sourceware.org> To: glibc-bugs@sources.redhat.com Subject: [Bug malloc/13939] New: Malloc can deadlock in retry paths Date: Tue, 03 Apr 2012 03:15:00 -0000 [thread overview] Message-ID: <bug-13939-131@http.sourceware.org/bugzilla/> (raw) http://sourceware.org/bugzilla/show_bug.cgi?id=13939 Bug #: 13939 Summary: Malloc can deadlock in retry paths Product: glibc Version: 2.15 Status: NEW Severity: normal Priority: P2 Component: malloc AssignedTo: unassigned@sourceware.org ReportedBy: law@redhat.com Classification: Unclassified Created attachment 6312 --> http://sourceware.org/bugzilla/attachment.cgi?id=6312 Potential fix Assume we're we're in libc_malloc and the call to _int_malloc has failed because we're unable to sbrk more memory. We (sensibly) have code which will attempt to use mmap to allocate the memory in the inner else clause below: arena_lock(ar_ptr, bytes); if(!ar_ptr) return 0; victim = _int_malloc(ar_ptr, bytes); if(!victim) { /* Maybe the failure is due to running out of mmapped areas. */ if(ar_ptr != &main_arena) { (void)mutex_unlock(&ar_ptr->mutex); ar_ptr = &main_arena; (void)mutex_lock(&ar_ptr->mutex); victim = _int_malloc(ar_ptr, bytes); (void)mutex_unlock(&ar_ptr->mutex); } else { /* ... or sbrk() has failed and there is still a chance to mmap() */ ar_ptr = arena_get2(ar_ptr->next ? ar_ptr : 0, bytes); (void)mutex_unlock(&main_arena.mutex); if(ar_ptr) { victim = _int_malloc(ar_ptr, bytes); (void)mutex_unlock(&ar_ptr->mutex); } } } else (void)mutex_unlock(&ar_ptr->mutex); Make note that the arena referenced by ar_ptr will still be locked when we call arena_get2 in that else clause. Furthermore, we know that ar_ptr must refer to the main arena. Now assume that there are no arenas on the free list and that we've already hit the limit for the number of arenas we're willing to create. In that case arena_get2 will call reused_arena which looks like this: static mstate reused_arena (void) { mstate result; static mstate next_to_use; if (next_to_use == NULL) next_to_use = &main_arena; result = next_to_use; do { if (!mutex_trylock(&result->mutex)) goto out; result = result->next; } while (result != next_to_use); /* No arena available. Wait for the next in line. */ (void)mutex_lock(&result->mutex); So let's make a couple more assumptions. First, assume that next_to_use refers to the main arena. Second assume that all the other arenas are currently locked by other threads. And remember that the main arena is locked by the current thread. In that case the do-while loop will look at the every arena on the list and find them all locked. When it hits result == main_arena, then the loop will exit and we call mutex_lock to acquire the main arena's lock. But since the main arena is already locked by the current thread, we deadlock. libc_calloc seems to have the same problem. libc_memalign, libc_calloc, libc_pvalloc all release the lock before calling arena_get2 in the case where sbrk failed, which seems to be the right thing to do. Closely related since fixing the deadlock provides us with a great opportunity to unify the 5 implementations a little. For example we have this from libc_memalign: mstate prev = ar_ptr->next ? ar_ptr : 0; (void)mutex_unlock(&ar_ptr->mutex); ar_ptr = arena_get2(prev, bytes); Which seems like the right way to go, so I'd like to unify the 5 implementations to have a similar structure. ie, release the lock within the conditional just prior to calling arena_get2 and being safe WRT modification of ar_ptr->next. 
Dropping the lock prior to calling arena_get2 avoids the deadlock.
However, in this case all that happens is that the retried allocation
fails again: the original allocation was trying to allocate from the
main arena, and the retry will land in the main arena too if it can't
acquire the lock on any other arena.  Getting this 100% correct would
require retrying in every arena, potentially blocking to acquire the
lock at each step, which seems rather excessive.  Instead we can pass
in a state variable indicating that we're in a retry situation and, if
so, avoid retrying in the same arena that just failed.

Attached is the fix we're currently using internally at Red Hat.  It
needed slight updating after Uli's cleanups to malloc.c and arena.c,
but the basic structure is the same as what Red Hat is using
internally.  I've verified that glibc still builds after adjusting for
those cleanups.
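For concreteness, here is a sketch of how reused_arena could skip the
arena that just failed.  This is an illustrative shape based on the
loop quoted above, not necessarily the attached patch; the avoid_arena
parameter and the out: tail are filled in here for illustration:

  static mstate
  reused_arena (mstate avoid_arena)
  {
    mstate result;
    static mstate next_to_use;
    if (next_to_use == NULL)
      next_to_use = &main_arena;

    result = next_to_use;
    do
      {
        if (!mutex_trylock(&result->mutex))
          goto out;

        result = result->next;
      }
    while (result != next_to_use);

    /* Don't block on the arena whose allocation just failed; in the
       scenario above the caller holds (or has just released) its
       lock, and retrying there is pointless anyway.  */
    if (avoid_arena && result == avoid_arena)
      result = result->next;

    /* No arena available.  Wait for the next in line.  */
    (void)mutex_lock(&result->mutex);

  out:
    next_to_use = result->next;
    return result;
  }

Callers would pass down the arena that just failed, or NULL on a
first-chance allocation.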
Thread overview: 10+ messages

2012-04-03  3:15 law at redhat dot com [this message]
2012-04-03  3:46 ` [Bug malloc/13939] " carlos_odonell at mentor dot com
2012-04-03  7:46 ` aj at suse dot de
2012-04-03 19:33 ` ppluzhnikov at google dot com
2012-06-22 21:16 ` roland at gnu dot org
2012-06-23 11:28 ` pluto at agmk dot net
2012-07-18  2:00 ` carlos_odonell at mentor dot com
2012-07-18 14:48 ` law at redhat dot com
2012-11-29 17:38 ` law at redhat dot com
2014-06-25 11:23 ` fweimer at redhat dot com