public inbox for libc-alpha@sourceware.org
From: Cupertino Miranda <cupertino.miranda@oracle.com>
To: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: libc-alpha@sourceware.org, Florian Weimer <fweimer@redhat.com>,
	jose.marchesi@oracle.com, elena.zannoni@oracle.com
Subject: Re: [PATCH v5 1/1] Created tunable to force small pages on stack allocation.
Date: Thu, 13 Apr 2023 17:13:39 +0100	[thread overview]
Message-ID: <87pm87daks.fsf@oracle.com> (raw)
In-Reply-To: <645838ea-afc0-0289-233e-50fa22f126c1@linaro.org>


Hi Adhemerval,

Sorry for taking a while to reply to you.

I compiled your patch with the particular example that led me to propose
the tunable; it does not seem to trigger the optimization in that
particular case.

Looking at the code, I realized that the condition would most likely
never allow the stack to be madvised unless guardsize were itself a
multiple of thpsize (e.g. guardsize == thpsize), which appears
unrealistic.

+  if ((uintptr_t) mem % thpsize != 0           (is not aligned)
+      || size % thpsize != 0                   (1)
+      || (size - guardsize) % thpsize != 0)    (2)
+    return 0;

If (1) does not hold, then (2) almost certainly does, since the default
guardsize is a single regular page and therefore not a multiple of
thpsize.  A worked example with typical numbers follows.
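
A minimal sketch of why the check never fires in practice; the 2 MiB THP
size, 8 MiB stack and 4 KiB guard page below are only assumed,
illustrative numbers:

  #include <assert.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Assumed, illustrative values: 2 MiB THP size, 8 MiB stack and the
       usual single-page (4 KiB) guard.  */
    unsigned long int thpsize = 2UL * 1024 * 1024;
    unsigned long int size = 8UL * 1024 * 1024;
    unsigned long int guardsize = 4096;

    /* (1) is false: the stack size is a multiple of the THP size.  */
    assert (size % thpsize == 0);
    /* ...but then (2) is true, so advise_thp would return 0 before ever
       calling madvise.  Only guardsize % thpsize == 0 lets it through.  */
    assert ((size - guardsize) % thpsize != 0);

    puts ("madvise (MADV_NOHUGEPAGE) would be skipped");
    return 0;
  }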


I agree that for the scenarios where huge pages would bring no benefit,
we should advise the kernel not to use them and avoid the bloat.
In any case, I am not fully sure you can always predict that.

Lastly, I would say that the tunable would still be useful.
For example, when the application creates short-lived threads but
allocates hugepage-sized stacks to control alignment, as sketched below.
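
A minimal sketch of that scenario; the 4 MiB stack size and the trivial
worker are only illustrative assumptions, and the tunable name is assumed
here to be glibc.pthread.stack_hugetlb (set through GLIBC_TUNABLES) as in
this patch series:

  #include <pthread.h>

  /* Hypothetical short-lived worker that touches very little of its stack.  */
  static void *
  worker (void *arg)
  {
    return NULL;
  }

  int
  main (void)
  {
    /* Request a stack that is a multiple of the (assumed) 2 MiB THP size
       so the mapping stays hugepage-aligned, even though only a small
       fraction of it will ever be touched.  */
    pthread_attr_t attr;
    pthread_attr_init (&attr);
    pthread_attr_setstacksize (&attr, 4UL * 1024 * 1024);

    pthread_t thr;
    if (pthread_create (&thr, &attr, worker, NULL) != 0)
      return 1;
    pthread_join (thr, NULL);
    pthread_attr_destroy (&attr);
    return 0;
  }

Running such a program with GLIBC_TUNABLES=glibc.pthread.stack_hugetlb=0
would force small pages for the thread stacks regardless of the heuristic
above.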

Regards,
Cupertino


Adhemerval Zanella Netto writes:

> On 12/04/23 05:53, Cupertino Miranda wrote:
>>>
>>> So this patch is LGTM, and I will install this shortly.
>>>
> >>> I also discussed on the same call whether it would be better to make the
> >>> madvise the *default* behavior if the pthread stack usage will always end
> >>> up requiring the kernel to split it up into default pages, i.e.:
>>>
>>>   1. THP (/sys/kernel/mm/transparent_hugepage/enabled) is set to
>>>      'always'.
>>>
> >>>   2. The stack size is a multiple of the THP size
> >>>      (/sys/kernel/mm/transparent_hugepage/hpage_pmd_size).
> >>>
> >>>   3. The stack size minus the guard pages is still a multiple of the
> >>>      THP size ((stack_size - guard_size) % thp_size == 0).
>>>
> >>> It does not mean that the stack will automatically be backed by THP, but
> >>> it does mean that, depending on the process VMA, it might generate some
> >>> RSS waste once the kernel decides to use THP for the stack.  And it
> >>> should also make the tunables unnecessary.
>>>
>>> [1] https://sourceware.org/glibc/wiki/PatchworkReviewMeetings
>>> [2] https://bugs.openjdk.org/browse/JDK-8303215?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&showAll=true
>>> [3] https://lore.kernel.org/linux-mm/278ec047-4c5d-ab71-de36-094dbed4067c@redhat.com/T/
>
> I implemented my idea above, which should cover the issue you brought up
> without the need for the extra tunable.  It seems that if the kernel cannot
> keep track of the touched 'subpages' once THP is used for the stack
> allocation, it should always be an improvement to madvise (MADV_NOHUGEPAGE).
>
> What do you think?
>
> ---
>
> [PATCH] nptl: Disable THP on thread stack if it incurs large RSS usage
>
> If Transparent Huge Pages (THP) are set to 'always' and the resulting
> address and the stack size are multiples of the THP size, the kernel
> may use THP for the thread stack.  However, if the guard page size is
> not a multiple of the THP size, once it is mprotected the allocated
> range can no longer be served with THP and the kernel reverts back to
> using default page sizes.
>
> However, the kernel might also not keep track of which offsets within
> the THP have been touched and need to stay resident in memory.  It
> will then keep all the small pages, thus using much more memory than
> required.  In this scenario, it is better to just madvise the kernel
> not to use huge pages and avoid the memory bloat.
>
> __malloc_default_thp_pagesize and __malloc_thp_mode now cache the
> obtained value to avoid having to read and parse the kernel
> information on each thread creation (if the system changes its
> setting, the process will not be able to adjust to it).
>
> Checked on x86_64-linux-gnu.
> ---
>  nptl/allocatestack.c                       | 32 +++++++++++++++
>  sysdeps/generic/malloc-hugepages.h         |  1 +
>  sysdeps/unix/sysv/linux/malloc-hugepages.c | 46 ++++++++++++++++++----
>  3 files changed, 72 insertions(+), 7 deletions(-)
>
> diff --git a/nptl/allocatestack.c b/nptl/allocatestack.c
> index c7adbccd6f..d197edf2e9 100644
> --- a/nptl/allocatestack.c
> +++ b/nptl/allocatestack.c
> @@ -33,6 +33,7 @@
>  #include <nptl-stack.h>
>  #include <libc-lock.h>
>  #include <tls-internal.h>
> +#include <malloc-hugepages.h>
>
>  /* Default alignment of stack.  */
>  #ifndef STACK_ALIGN
> @@ -206,6 +207,33 @@ advise_stack_range (void *mem, size_t size, uintptr_t pd, size_t guardsize)
>  #endif
>  }
>
> +/* If the Transparent Huge Page (THP) is set as 'always', the resulting
> +   address and the stack size are multiple of THP size, kernel may use THP for
> +   the thread stack.  However, if the guard page size is not multiple of THP,
> +   once it is mprotect the allocate range could no longer be served with THP
> +   and then kernel will revert back using default page sizes.
> +
> +   However, the kernel might also not keep track of the offsets within the THP
> +   that has been touched and need to reside on the memory.  It will then keep
> +   all the small pages, thus using much more memory than required.  In this
> +   scenario, it is better to just madvise that not use huge pages and avoid
> +   the memory bloat.  */
> +static __always_inline int
> +advise_thp (void *mem, size_t size, size_t guardsize)
> +{
> +  enum malloc_thp_mode_t thpmode = __malloc_thp_mode ();
> +  if (thpmode != malloc_thp_mode_always)
> +    return 0;
> +
> +  unsigned long int thpsize = __malloc_default_thp_pagesize ();
> +  if ((uintptr_t) mem % thpsize != 0
> +      || size % thpsize != 0
> +      || (size - guardsize) % thpsize != 0)
> +    return 0;
> +
> +  return __madvise (mem, size, MADV_NOHUGEPAGE);
> +}
> +
>  /* Returns a usable stack for a new thread either by allocating a
>     new stack or reusing a cached stack of sufficient size.
>     ATTR must be non-NULL and point to a valid pthread_attr.
> @@ -373,6 +401,10 @@ allocate_stack (const struct pthread_attr *attr, struct pthread **pdp,
>  	     So we can never get a null pointer back from mmap.  */
>  	  assert (mem != NULL);
>
> +	  int r = advise_thp (mem, size, guardsize);
> +	  if (r != 0)
> +	    return r;
> +
>  	  /* Place the thread descriptor at the end of the stack.  */
>  #if TLS_TCB_AT_TP
>  	  pd = (struct pthread *) ((((uintptr_t) mem + size)
> diff --git a/sysdeps/generic/malloc-hugepages.h b/sysdeps/generic/malloc-hugepages.h
> index d68b85630c..21d4844bc4 100644
> --- a/sysdeps/generic/malloc-hugepages.h
> +++ b/sysdeps/generic/malloc-hugepages.h
> @@ -26,6 +26,7 @@ unsigned long int __malloc_default_thp_pagesize (void) attribute_hidden;
>
>  enum malloc_thp_mode_t
>  {
> +  malloc_thp_mode_unknown,
>    malloc_thp_mode_always,
>    malloc_thp_mode_madvise,
>    malloc_thp_mode_never,
> diff --git a/sysdeps/unix/sysv/linux/malloc-hugepages.c b/sysdeps/unix/sysv/linux/malloc-hugepages.c
> index 683d68c327..5954dd13f6 100644
> --- a/sysdeps/unix/sysv/linux/malloc-hugepages.c
> +++ b/sysdeps/unix/sysv/linux/malloc-hugepages.c
> @@ -22,19 +22,33 @@
>  #include <not-cancel.h>
>  #include <sys/mman.h>
>
> +/* The __malloc_thp_mode is called only in single-thread mode, either in
> +   malloc initialization or pthread creation.  */
> +static unsigned long int thp_pagesize = -1;
> +
>  unsigned long int
>  __malloc_default_thp_pagesize (void)
>  {
> +  unsigned long int size = atomic_load_relaxed (&thp_pagesize);
> +  if (size != -1)
> +    return size;
> +
>    int fd = __open64_nocancel (
>      "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", O_RDONLY);
>    if (fd == -1)
> -    return 0;
> +    {
> +      atomic_store_relaxed (&thp_pagesize, 0);
> +      return 0;
> +    }
>
>    char str[INT_BUFSIZE_BOUND (unsigned long int)];
>    ssize_t s = __read_nocancel (fd, str, sizeof (str));
>    __close_nocancel (fd);
>    if (s < 0)
> -    return 0;
> +    {
> +      atomic_store_relaxed (&thp_pagesize, 0);
> +      return 0;
> +    }
>
>    unsigned long int r = 0;
>    for (ssize_t i = 0; i < s; i++)
> @@ -44,16 +58,28 @@ __malloc_default_thp_pagesize (void)
>        r *= 10;
>        r += str[i] - '0';
>      }
> +  atomic_store_relaxed (&thp_pagesize, r);
>    return r;
>  }
>
> +/* The __malloc_thp_mode is called only in single-thread mode, either in
> +   malloc initialization or pthread creation.  */
> +static enum malloc_thp_mode_t thp_mode = malloc_thp_mode_unknown;
> +
>  enum malloc_thp_mode_t
>  __malloc_thp_mode (void)
>  {
> +  enum malloc_thp_mode_t mode = atomic_load_relaxed (&thp_mode);
> +  if (mode != malloc_thp_mode_unknown)
> +    return mode;
> +
>    int fd = __open64_nocancel ("/sys/kernel/mm/transparent_hugepage/enabled",
>  			      O_RDONLY);
>    if (fd == -1)
> -    return malloc_thp_mode_not_supported;
> +    {
> +      atomic_store_relaxed (&thp_mode, malloc_thp_mode_not_supported);
> +      return malloc_thp_mode_not_supported;
> +    }
>
>    static const char mode_always[]  = "[always] madvise never\n";
>    static const char mode_madvise[] = "always [madvise] never\n";
> @@ -66,13 +92,19 @@ __malloc_thp_mode (void)
>    if (s == sizeof (mode_always) - 1)
>      {
>        if (strcmp (str, mode_always) == 0)
> -	return malloc_thp_mode_always;
> +	mode = malloc_thp_mode_always;
>        else if (strcmp (str, mode_madvise) == 0)
> -	return malloc_thp_mode_madvise;
> +	mode = malloc_thp_mode_madvise;
>        else if (strcmp (str, mode_never) == 0)
> -	return malloc_thp_mode_never;
> +	mode = malloc_thp_mode_never;
> +      else
> +	mode = malloc_thp_mode_not_supported;
>      }
> -  return malloc_thp_mode_not_supported;
> +  else
> +    mode = malloc_thp_mode_not_supported;
> +
> +  atomic_store_relaxed (&thp_mode, mode);
> +  return mode;
>  }
>
>  static size_t
