public inbox for libc-alpha@sourceware.org
From: Cupertino Miranda <cupertino.miranda@oracle.com>
To: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: libc-alpha@sourceware.org, Florian Weimer <fweimer@redhat.com>,
	jose.marchesi@oracle.com, elena.zannoni@oracle.com
Subject: Re: [PATCH v5 1/1] Created tunable to force small pages on stack allocation.
Date: Wed, 12 Apr 2023 09:53:42 +0100	[thread overview]
Message-ID: <873555cwh5.fsf@oracle.com> (raw)
In-Reply-To: <8f313a5d-f16a-d682-1d78-f216c446099f@linaro.org>


Hi Adhemerval, everyone,

Thanks for the approval, the detailed analysis, and the time spent on the topic.

Best regards,
Cupertino

Adhemerval Zanella Netto writes:

> On 28/03/23 12:22, Cupertino Miranda via Libc-alpha wrote:
>> Created tunable glibc.pthread.stack_hugetlb to control when hugepages
>> can be used for stack allocation.
>> In case THP is enabled and glibc.pthread.stack_hugetlb is set to
>> 0, glibc will madvise the kernel not to allow hugepages for stack
>> allocations.
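>>
>> For instance, to disable hugepages on thread stacks, set the tunable
>> in the environment (illustrative command; ./myapp stands for any
>> program):
>>
>>   $ GLIBC_TUNABLES=glibc.pthread.stack_hugetlb=0 ./myapp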
>>
>> Changed from v1:
>>  - removed the __malloc_thp_mode calls to check if hugetlb is
>>    enabled.
>>
>> Changed from v2:
>>  - Added entry in manual/tunables.texi
>>  - Fixed tunable default to description
>>  - Code style corrections.
>>
>> Changes from v3:
>>  - Improve tunables.texi.
>>
>> Changes from v4:
>>  - Improved text in tunables.texi by suggestion of Adhemerval.
>
> Florian has raised a concern [1] that the reported RSS increase is not
> technically correct, because once the kernel needs to split the Huge
> Page it does not need to keep all of the small pages (only the one that
> actually generated the soft fault).
>
> However, this is not what I see with the previous testcase, which
> creates a lot of threads to force THP usage and checks
> /proc/self/smaps_rollup.  The resulting 'Private_Dirty' still accounts
> for *all* the default small pages once the kernel decides to split the
> page, and a recent OpenJDK thread [2] reports the same outcome.
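>
> For reference, a minimal sketch of that kind of testcase (the thread
> count and stack size are arbitrary; it assumes THP is set to 'always'):
>
>   #include <pthread.h>
>   #include <stdlib.h>
>   #include <unistd.h>
>
>   static void *
>   thread_func (void *arg)
>   {
>     pause ();  /* Keep the stack mapped while we look at smaps.  */
>     return NULL;
>   }
>
>   int
>   main (void)
>   {
>     pthread_attr_t attr;
>     pthread_attr_init (&attr);
>     /* 2 MiB stacks, a multiple of the usual THP size.  */
>     pthread_attr_setstacksize (&attr, 2 * 1024 * 1024);
>     for (int i = 0; i < 128; i++)
>       {
>         pthread_t t;
>         if (pthread_create (&t, &attr, thread_func, NULL) != 0)
>           abort ();
>       }
>     sleep (1);  /* Crude, but lets the threads touch their stacks.  */
>     system ("grep -E 'Rss|Private_Dirty' /proc/self/smaps_rollup");
>     return 0;
>   }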
>
> AFAIU the kernel does not keep track of which small pages within the
> THP have already been touched by the time the guard page is mprotected
> (which forces the split), so when it reverts back to default pages it
> keeps all of them resident.  A recent kernel discussion reached a
> similar conclusion [3].
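>
> The sequence that triggers the split is essentially (a hypothetical
> standalone snippet, not glibc code):
>
>   #include <sys/mman.h>
>
>   int
>   main (void)
>   {
>     char *p = mmap (NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE,
>                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>     p[0] = 1;  /* May fault in a whole huge page.  */
>     /* A PROT_NONE guard page inside the mapping forces the kernel to
>        split the PMD; afterwards all the small pages stay resident.  */
>     mprotect (p, 4096, PROT_NONE);
>     return 0;
>   }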
>
> So this patch LGTM, and I will install it shortly.
>
> I also discussed on the same call whether it would be better to make
> the madvise the *default* behavior when pthread stack usage will always
> end up requiring the kernel to split the Huge Page back into default
> pages, i.e. when:
>
>   1. THP (/sys/kernel/mm/transparent_hugepage/enabled) is set to
>      'always'.
>
>   2. The stack size is multiple of THP size
>      (/sys/kernel/mm/transparent_hugepage/hpage_pmd_size).
>
>   3. The stack size minus the guard size is still a multiple of the
>      THP size ((stack_size - guard_size) % thp_size == 0); see the
>      sketch after this list.
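>
> A rough sketch of the combined check (a hypothetical helper, not the
> actual implementation; thp_size would come from hpage_pmd_size):
>
>   #include <stdbool.h>
>   #include <stddef.h>
>
>   /* Return true if a stack of STACK_SIZE bytes with GUARD_SIZE bytes
>      of guard would inevitably make the kernel split the huge page.
>      THP_SIZE is 0 when THP is not set to 'always'.  */
>   static bool
>   stack_forces_thp_split (size_t stack_size, size_t guard_size,
>                           size_t thp_size)
>   {
>     if (thp_size == 0)
>       return false;
>     return (stack_size % thp_size == 0
>             && (stack_size - guard_size) % thp_size == 0);
>   }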
>
> Meeting these conditions does not mean that the stack will
> automatically be backed by THP, but it does mean that, depending on the
> process VMAs, it might generate some RSS waste once the kernel decides
> to use THP for the stack.  Making the madvise the default in that case
> should also make the tunable unnecessary.
>
> [1] https://sourceware.org/glibc/wiki/PatchworkReviewMeetings
> [2] https://bugs.openjdk.org/browse/JDK-8303215?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel&showAll=true
> [3] https://lore.kernel.org/linux-mm/278ec047-4c5d-ab71-de36-094dbed4067c@redhat.com/T/
>
>> ---
>>  manual/tunables.texi          | 15 +++++++++++++++
>>  nptl/allocatestack.c          |  6 ++++++
>>  nptl/nptl-stack.c             |  1 +
>>  nptl/nptl-stack.h             |  3 +++
>>  nptl/pthread_mutex_conf.c     |  8 ++++++++
>>  sysdeps/nptl/dl-tunables.list |  6 ++++++
>>  6 files changed, 39 insertions(+)
>>
>> diff --git a/manual/tunables.texi b/manual/tunables.texi
>> index 70dd2264c5..130f94b2bc 100644
>> --- a/manual/tunables.texi
>> +++ b/manual/tunables.texi
>> @@ -459,6 +459,21 @@ registration on behalf of the application.
>>  Restartable sequences are a Linux-specific extension.
>>  @end deftp
>>
>> +@deftp Tunable glibc.pthread.stack_hugetlb
>> +This tunable controls whether to use Huge Pages in the stacks created by
>> +@code{pthread_create}.  This tunable only affects the stacks created by
>> +@theglibc{}; it has no effect on stacks assigned with
>> +@code{pthread_attr_setstack}.
>> +
>> +The default is @samp{1} where the system default value is used.  Setting
>> +its value to @code{0} enables the use of @code{madvise} with
>> +@code{MADV_NOHUGEPAGE} after stack creation with @code{mmap}.
>> +
>> +This is a memory utilization optimization, since the internal glibc setup of
>> +both the thread descriptor and the guard page might force the kernel to move
>> +a thread stack originally backed by Huge Pages to default pages.
>> +@end deftp
>> +
>>  @node Hardware Capability Tunables
>>  @section Hardware Capability Tunables
>>  @cindex hardware capability tunables
>> diff --git a/nptl/allocatestack.c b/nptl/allocatestack.c
>> index c7adbccd6f..f9d8cdfd08 100644
>> --- a/nptl/allocatestack.c
>> +++ b/nptl/allocatestack.c
>> @@ -369,6 +369,12 @@ allocate_stack (const struct pthread_attr *attr, struct pthread **pdp,
>>  	  if (__glibc_unlikely (mem == MAP_FAILED))
>>  	    return errno;
>>
>> +	  /* Do madvise in case the tunable glibc.pthread.stack_hugetlb is
>> +	     set to 0, disabling hugetlb.  */
>> +	  if (__glibc_unlikely (__nptl_stack_hugetlb == 0)
>> +	      && __madvise (mem, size, MADV_NOHUGEPAGE) != 0)
>> +	    return errno;
>> +
>>  	  /* SIZE is guaranteed to be greater than zero.
>>  	     So we can never get a null pointer back from mmap.  */
>>  	  assert (mem != NULL);
>> diff --git a/nptl/nptl-stack.c b/nptl/nptl-stack.c
>> index 5eb7773575..e829711cb5 100644
>> --- a/nptl/nptl-stack.c
>> +++ b/nptl/nptl-stack.c
>> @@ -21,6 +21,7 @@
>>  #include <pthreadP.h>
>>
>>  size_t __nptl_stack_cache_maxsize = 40 * 1024 * 1024;
>> +int32_t __nptl_stack_hugetlb = 1;
>>
>>  void
>>  __nptl_stack_list_del (list_t *elem)
>> diff --git a/nptl/nptl-stack.h b/nptl/nptl-stack.h
>> index 34f8bbb15e..cf90b27c2b 100644
>> --- a/nptl/nptl-stack.h
>> +++ b/nptl/nptl-stack.h
>> @@ -27,6 +27,9 @@
>>  /* Maximum size of the cache, in bytes.  40 MiB by default.  */
>>  extern size_t __nptl_stack_cache_maxsize attribute_hidden;
>>
>> +/* Whether to allow stacks to use hugetlb.  The default is 1 (allowed).  */
>> +extern int32_t __nptl_stack_hugetlb;
>> +
>>  /* Check whether the stack is still used or not.  */
>>  static inline bool
>>  __nptl_stack_in_use (struct pthread *pd)
>> diff --git a/nptl/pthread_mutex_conf.c b/nptl/pthread_mutex_conf.c
>> index 329c4cbb8f..60ef9095aa 100644
>> --- a/nptl/pthread_mutex_conf.c
>> +++ b/nptl/pthread_mutex_conf.c
>> @@ -45,6 +45,12 @@ TUNABLE_CALLBACK (set_stack_cache_size) (tunable_val_t *valp)
>>    __nptl_stack_cache_maxsize = valp->numval;
>>  }
>>
>> +static void
>> +TUNABLE_CALLBACK (set_stack_hugetlb) (tunable_val_t *valp)
>> +{
>> +  __nptl_stack_hugetlb = (int32_t) valp->numval;
>> +}
>> +
>>  void
>>  __pthread_tunables_init (void)
>>  {
>> @@ -52,5 +58,7 @@ __pthread_tunables_init (void)
>>                 TUNABLE_CALLBACK (set_mutex_spin_count));
>>    TUNABLE_GET (stack_cache_size, size_t,
>>                 TUNABLE_CALLBACK (set_stack_cache_size));
>> +  TUNABLE_GET (stack_hugetlb, int32_t,
>> +	       TUNABLE_CALLBACK (set_stack_hugetlb));
>>  }
>>  #endif
>> diff --git a/sysdeps/nptl/dl-tunables.list b/sysdeps/nptl/dl-tunables.list
>> index bd1ddb121d..4cde9500b6 100644
>> --- a/sysdeps/nptl/dl-tunables.list
>> +++ b/sysdeps/nptl/dl-tunables.list
>> @@ -33,5 +33,11 @@ glibc {
>>        maxval: 1
>>        default: 1
>>      }
>> +    stack_hugetlb {
>> +      type: INT_32
>> +      minval: 0
>> +      maxval: 1
>> +      default: 1
>> +    }
>>    }
>>  }
