* [PATCH] malloc: Do not use MAP_NORESERVE to allocate heap segments
From: Florian Weimer @ 2022-08-10 6:22 UTC
To: libc-alpha
Address space for heap segments is reserved in an mmap call with
MAP_ANONYMOUS | MAP_PRIVATE and protection flags PROT_NONE. This
reservation does not count against the RSS limit of the process or
system. Backing memory is allocated using mprotect in alloc_new_heap
and grow_heap, and at this point, the allocator expects the kernel
to provide memory (subject to memory overcommit).
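
For illustration only (this standalone sketch is not part of the patch
and not glibc code), the reserve-then-commit pattern described above
boils down to the following; the sizes are arbitrary example values:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int
main (void)
{
  /* Arbitrary example sizes: reserve 64 MiB of address space and
     commit only the first 1 MiB.  */
  size_t reserve_size = 64UL << 20;
  size_t commit_size = 1UL << 20;

  /* PROT_NONE reservation: address space only, no backing memory yet.  */
  void *heap = mmap (NULL, reserve_size, PROT_NONE,
                     MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  if (heap == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }

  /* Commit part of the reservation; the kernel is now expected to
     provide backing memory, subject to memory overcommit.  */
  if (mprotect (heap, commit_size, PROT_READ | PROT_WRITE) != 0)
    {
      perror ("mprotect");
      return 1;
    }

  memset (heap, 0, commit_size);
  munmap (heap, reserve_size);
  return 0;
}
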
The SIGSEGV that might be generated due to MAP_NORESERVE (according to
the mmap manual page) does not seem to occur in practice; what is
observed instead is always a SIGKILL from the OOM killer. Even if
there were a way for the SIGSEGV to be generated, it would be confusing
to applications that this happens only for secondary heaps, not for
large mmap-based allocations, and not for the main arena.
---
malloc/arena.c | 5 +----
malloc/malloc.c | 4 ----
2 files changed, 1 insertion(+), 8 deletions(-)
diff --git a/malloc/arena.c b/malloc/arena.c
index defd25c8a6..074ecbc09f 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -559,16 +559,13 @@ new_heap (size_t size, size_t top_pad)
#if HAVE_TUNABLES
if (__glibc_unlikely (mp_.hp_pagesize != 0))
{
- /* MAP_NORESERVE is not used for huge pages because some kernel may
- not reserve the mmap region and a subsequent access may trigger
- a SIGBUS if there is no free pages in the pool. */
heap_info *h = alloc_new_heap (size, top_pad, mp_.hp_pagesize,
mp_.hp_flags);
if (h != NULL)
return h;
}
#endif
- return alloc_new_heap (size, top_pad, GLRO (dl_pagesize), MAP_NORESERVE);
+ return alloc_new_heap (size, top_pad, GLRO (dl_pagesize), 0);
}
/* Grow a heap. size is automatically rounded up to a
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 914052eb69..29fa71b3b2 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1110,10 +1110,6 @@ static mchunkptr mremap_chunk(mchunkptr p, size_t new_size);
# define MAP_ANONYMOUS MAP_ANON
#endif
-#ifndef MAP_NORESERVE
-# define MAP_NORESERVE 0
-#endif
-
#define MMAP(addr, size, prot, flags) \
__mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0)
* Re: [PATCH] malloc: Do not use MAP_NORESERVE to allocate heap segments
From: Siddhesh Poyarekar @ 2022-08-15 13:14 UTC
To: Florian Weimer, libc-alpha
On 2022-08-10 02:22, Florian Weimer via Libc-alpha wrote:
> Address space for heap segments is reserved in an mmap call with
> MAP_ANONYMOUS | MAP_PRIVATE and protection flags PROT_NONE. This
> reservation does not count against the RSS limit of the process or
> system. Backing memory is allocated using mprotect in alloc_new_heap
> and grow_heap, and at this point, the allocator expects the kernel
> to provide memory (subject to memory overcommit).
>
> The SIGSEGV that might be generated due to MAP_NORESERVE (according to
> the mmap manual page) does not seem to occur in practice; what is
> observed instead is always a SIGKILL from the OOM killer. Even if
> there were a way for the SIGSEGV to be generated, it would be confusing
> to applications that this happens only for secondary heaps, not for
> large mmap-based allocations, and not for the main arena.
>
> ---
> malloc/arena.c | 5 +----
> malloc/malloc.c | 4 ----
> 2 files changed, 1 insertion(+), 8 deletions(-)
LGTM.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>