public inbox for libc-alpha@sourceware.org
From: Qingqing Li <liqingqing3@huawei.com>
To: Adhemerval Zanella <adhemerval.zanella@linaro.org>,
	DJ Delorie <dj@redhat.com>,
	Yang Yanchao <yangyanchao6@huawei.com>
Cc: <carlos@redhat.com>,
	"libc-alpha@sourceware.org" <libc-alpha@sourceware.org>,
	<glebfm@altlinux.org>, <ldv@altlinux.org>,
	<linfeilong@huawei.com>, Qingqing Li <liqingqing3@huawei.com>
Subject: Re: Fwd: malloc: Optimize the number of arenas for better application performance
Date: Wed, 29 Jun 2022 10:37:04 +0800	[thread overview]
Message-ID: <4f855db3-d4d5-eb0a-0edf-b7e2a61d6a78@huawei.com> (raw)
In-Reply-To: <1a8f10e034e7489c8e9f090e9c90b396@huawei.com>

>> On 28 Jun 2022, at 15:56, DJ Delorie <dj@redhat.com> wrote:
>>
>> Yang Yanchao <yangyanchao6@huawei.com> writes:
>>> However, my machine is 96 cores and I have 91 cores bound.
>>
>> One benchmark on one uncommon configuration is not sufficient reason to
>> change a core tunable.  What about other platforms?  Other benchmarks?
>> Other percentages of cores scheduled?
>>
>> I would reject this patch based solely on the lack of data backing up
>> your claims.
>>
>>> -              int n = __get_nprocs_sched ();
>>> +              int n = __get_nprocs ();
>>
>> I've heard complaints about how our code leads to hundreds of arenas on
>> processes scheduled on only two CPUs.  I think using the number of
>> *schedulable* cores makes more sense than using the number of *unusable*
>> cores.
>>
>> I think this change warrants more research.
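
For reference, the two counts can be compared outside glibc with a small
test program.  This is only a minimal sketch, assuming a Linux host with
at most 1024 CPUs; it is not the glibc-internal code path, just the public
equivalents of the two calls being discussed:

#define _GNU_SOURCE
#include <sched.h>        /* sched_getaffinity, CPU_COUNT */
#include <stdio.h>
#include <sys/sysinfo.h>  /* get_nprocs */

int
main (void)
{
  cpu_set_t set;
  int schedulable = -1;

  /* Count only the CPUs this process is allowed to run on.  */
  if (sched_getaffinity (0, sizeof (set), &set) == 0)
    schedulable = CPU_COUNT (&set);

  /* get_nprocs reports CPUs that are online, regardless of affinity.  */
  printf ("online CPUs:      %d\n", get_nprocs ());
  printf ("schedulable CPUs: %d\n", schedulable);
  return 0;
}

Running it under taskset shows the gap directly, e.g. 96 online vs. 91
schedulable CPUs in the setup Yang Yanchao described.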
> 
> I think this patch makes sense, mainly because we changed to use the
> schedulable cores without much thought either.  Maybe we can revert
> to the previous semantics and investigate whether using the schedulable
> number makes more sense.
> 
Agreed.  The variable narenas_limit is initialized only once, so in a
scenario where the CPU affinity is adjusted dynamically at runtime,
__get_nprocs_sched is not a good choice.  In my opinion we should first
use __get_nprocs as the static value (the old default behavior) and let
users adjust the arena count with the glibc.malloc.arena_max tunable.
We can also do more research and testing to optimize the default number
of arenas.
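
To make the "initialized once" point concrete, here is a minimal sketch of
the shape of the limit computation in arena_get2 (malloc/arena.c).  It is
simplified and not the exact glibc source; compute_arena_limit and
arena_max_tunable are illustrative stand-ins for the static block inside
arena_get2 and the glibc.malloc.arena_max tunable:

#include <stddef.h>
#include <sys/sysinfo.h>   /* get_nprocs (public alias of __get_nprocs) */

/* 2 arenas per core on 32-bit, 8 per core on 64-bit, as in malloc/arena.c.  */
#define NARENAS_FROM_NCORES(n) ((n) * (sizeof (long) == 4 ? 2 : 8))

/* Stand-in for the glibc.malloc.arena_max tunable (0 means "not set").  */
static size_t arena_max_tunable;

static size_t
compute_arena_limit (void)
{
  /* Computed once and cached, which is why a later change in CPU
     affinity is never picked up.  */
  static size_t narenas_limit;

  if (narenas_limit == 0)
    {
      if (arena_max_tunable != 0)
        narenas_limit = arena_max_tunable;   /* explicit user setting wins */
      else
        {
          /* The patch replaces __get_nprocs_sched (schedulable CPUs)
             with __get_nprocs (online CPUs) at this point.  */
          int n = get_nprocs ();
          narenas_limit = NARENAS_FROM_NCORES (n >= 1 ? n : 2);
        }
    }
  return narenas_limit;
}

With this shape, going back to __get_nprocs restores the old default,
while glibc.malloc.arena_max remains available to anyone who wants a
different limit.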


Thread overview: 9+ messages
2022-06-28  9:40 Yang Yanchao
2022-06-28 11:18 ` Florian Weimer
2022-06-28 12:38   ` Siddhesh Poyarekar
2022-06-28 13:35 ` Adhemerval Zanella
2022-06-28 18:56 ` DJ Delorie
2022-06-28 19:17   ` Adhemerval Zanella
     [not found]     ` <1a8f10e034e7489c8e9f090e9c90b396@huawei.com>
2022-06-29  2:37       ` Qingqing Li [this message]
2022-06-29  5:25         ` Fwd: " Siddhesh Poyarekar
2022-06-29  8:05           ` [PATCH] malloc: Optimize the number of arenas for better application performance [BZ# 29296] Yang Yanchao

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4f855db3-d4d5-eb0a-0edf-b7e2a61d6a78@huawei.com \
    --to=liqingqing3@huawei.com \
    --cc=adhemerval.zanella@linaro.org \
    --cc=carlos@redhat.com \
    --cc=dj@redhat.com \
    --cc=glebfm@altlinux.org \
    --cc=ldv@altlinux.org \
    --cc=libc-alpha@sourceware.org \
    --cc=linfeilong@huawei.com \
    --cc=yangyanchao6@huawei.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.