From mboxrd@z Thu Jan  1 00:00:00 1970
From: Adhemerval Zanella Netto
Organization: Linaro
Date: Thu, 9 Mar 2023 14:11:35 -0300
Subject: Re: [RFC] Stack allocation, hugepages and RSS implications
To: Cupertino Miranda
Cc: libc-alpha@sourceware.org, "Jose E. Marchesi", Elena Zannoni, Cupertino Miranda
Message-ID: <8f22594a-145a-a358-7ae0-dbbe16d709e8@linaro.org>
In-Reply-To: <87edpy464g.fsf@oracle.com>
References: <87pm9j4azf.fsf@oracle.com> <87mt4n49ak.fsf@oracle.com> <06a84799-3a73-2bff-e157-281eed68febf@linaro.org> <87edpy464g.fsf@oracle.com>

On 09/03/23 06:38, Cupertino Miranda wrote:
>
> Adhemerval Zanella Netto writes:
>
>> On 08/03/23 11:17, Cupertino Miranda via Libc-alpha wrote:
>>>
>>> Hi everyone,
>>>
>>> For performance reasons, one of our in-house applications requires the
>>> TRANSPARENT_HUGEPAGES_ALWAYS option to be enabled in the Linux kernel,
>>> which makes the kernel back every large enough and suitably aligned
>>> memory allocation with hugepages. I believe the reason behind this
>>> decision is to have more control over data location.
>>
>> We have, since 2.35, the glibc.malloc.hugetlb tunable, where setting it
>> to 1 enables the MADV_HUGEPAGE madvise for mmap-allocated pages if the
>> mode is set to 'madvise' (/sys/kernel/mm/transparent_hugepage/enabled).
>> One option would be to use that instead of 'always', together with
>> glibc.malloc.hugetlb=1.
>>
>> The main drawback of this strategy is that it is a system-wide setting,
>> so it might affect other users/programs as well.
>>
>>>
>>> For stack allocation, it seems that hugepages make the resident set
>>> size (RSS) increase significantly, and without any apparent benefit,
>>> as the huge page will be split into small pages even before leaving
>>> glibc's stack allocation code.
>>>
>>> As an example, this is what happens in the case of pthread_create with
>>> a 2MB stack size:
>>> 1. mmap request for the 2MB allocation with PROT_NONE;
>>>    a huge page is "registered" by the kernel
>>> 2. the thread descriptor is written at the end of the stack;
>>>    this triggers a page fault in the kernel, which performs the actual
>>>    memory allocation of the 2MB
>>> 3. an mprotect changes the protection on the guard (one of the small
>>>    pages of the allocated space):
>>>    at this point the kernel needs to break the 2MB page into many
>>>    small pages in order to change the protection on that memory region
>>> This eliminates any benefit of having huge pages for stack allocation,
>>> but it also makes RSS increase by 2MB even though nothing was written
>>> to most of the small pages.
>>>
>>> As an exercise I added __madvise(..., MADV_NOHUGEPAGE) right after the
>>> __mmap in nptl/allocatestack.c. As expected, RSS was significantly
>>> reduced for the application.
>>>
>>> At this point I am quite confident that, in our particular use case,
>>> there is a real benefit in enforcing that stacks never use hugepages.
>>>
>>> This RFC is to understand whether I have missed some option in glibc
>>> that would allow better control over stack allocation.
>>> If not, I am tempted to propose/submit a change, in the form of a
>>> tunable, to enforce MADV_NOHUGEPAGE for stacks.
>>>
>>> In any case, I wonder if there is an actual use case where a hugepage
>>> would survive glibc stack allocation and bring an actual benefit.
>>>
>>> Looking forward to your comments.
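
To make the sequence above easier to reproduce, here is a minimal
standalone sketch of the three steps plus the MADV_NOHUGEPAGE exercise.
It is illustrative only: the stack/guard sizes and the mprotect ordering
approximate what the RFC describes, not the actual nptl/allocatestack.c
code, and it assumes the kernel happens to return a hugepage-aligned
mapping.

/* Sketch of the pthread stack allocation sequence described above,
   with the MADV_NOHUGEPAGE exercise applied right after the mmap.
   Illustrative only; not the actual nptl/allocatestack.c code.  */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define STACK_SIZE (2 * 1024 * 1024)  /* Hugepage-sized stack.  */
#define GUARD_SIZE (4 * 1024)         /* One small page.  */

int
main (void)
{
  /* Step 1: reserve the stack with PROT_NONE; with THP set to
     'always', a hugepage-aligned 2MB mapping is eligible for a huge
     page.  */
  void *mem = mmap (NULL, STACK_SIZE, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (mem == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }

  /* The exercise: opt the mapping out of THP before any fault can
     instantiate a huge page.  */
  if (madvise (mem, STACK_SIZE, MADV_NOHUGEPAGE) != 0)
    perror ("madvise");

  /* Step 2: make the stack accessible and write the thread descriptor
     at its end; without the madvise above, this first fault can
     allocate the whole 2MB huge page at once.  */
  mprotect (mem, STACK_SIZE, PROT_READ | PROT_WRITE);
  memset ((char *) mem + STACK_SIZE - 64, 0, 64);

  /* Step 3: set the guard page; if a huge page backs the stack, this
     protection change forces the kernel to split it into small
     pages.  */
  mprotect (mem, GUARD_SIZE, PROT_NONE);

  return 0;
}

Comparing RSS and AnonHugePages in /proc/<pid>/smaps with and without
the madvise call should show the difference described above.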
>>
>> Maybe also a similar strategy on pthread stack allocation, where if
>> transparent hugepages is 'always' and glibc.malloc.hugetlb is 3 we set
>> MADV_NOHUGEPAGE on internal mmaps. A value of '3' meaning 'disable THP'
>> might be confusing, but we currently have '0' as 'use the system
>> default'. It could also be another tunable, like glibc.hugetlb, to
>> decouple it from the malloc code.
>>
> The intent would not be to disable hugepages on all internal mmaps, as
> I think you said, but rather to do it just for stack allocations.
> Although more work, I would say that if we add this as a tunable then
> maybe we should move it out of the malloc namespace.

I was thinking of mmap allocations where internal usage might trigger
this behavior.

If I understood what is happening correctly: since the initial stack is
aligned to the hugepage size (assuming the x86 2MB hugepage and the 8MB
default stack size) and 'always' is set as the policy, the stack will
always be backed by hugepages. Then, when the guard page is set at
setup_stack_prot, the kernel is forced to split the huge page and back
the stack with default pages instead.

It seems to be a pthread-specific problem, since I think alloc_new_heap
already handles the mprotect correctly when hugepages are used. And I
agree with Florian that backing thread stacks with hugepages might
indeed reduce TLB misses. However, if you want to optimize for RSS you
can force the total thread stack size not to be a multiple of the
hugepage size:

$ cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
$ grep -w STACK_SIZE_TOTAL tststackalloc.c
#define STACK_SIZE_TOTAL (3 * (HUGE_PAGE_SIZE)) / 4
  size_t stack_size = STACK_SIZE_TOTAL;
$ ./testrun.sh ./tststackalloc 1
Page size: 4 kB, 2 MB huge pages
Will attempt to align allocations to make stacks eligible for huge pages
pid: 342503 (/proc/342503/smaps)
Creating 128 threads...
RSS: 537 pages (2199552 bytes = 2 MB)
Press enter to exit...

$ ./testrun.sh ./tststackalloc 0
Page size: 4 kB, 2 MB huge pages
pid: 342641 (/proc/342641/smaps)
Creating 128 threads...
RSS: 536 pages (2195456 bytes = 2 MB)
Press enter to exit...

But I think a tunable to force it for all stack sizes might indeed be
useful.

> If moving it out of malloc is not OK for backward-compatibility
> reasons, then I would say create a new tunable specific to this
> purpose, like glibc.stack_nohugetlb?

We don't enforce tunable compatibility, and we already have the
glibc.pthread namespace. Maybe we can use glibc.pthread.stack_hugetlb,
with 0 to use the system default and 1 to avoid hugepages by calling
madvise with MADV_NOHUGEPAGE (we might change this semantic).

>
> The more I think about this, the less I feel we will ever be able to
> practically use hugepages in stacks. We can declare them as such, but
> soon enough the kernel will split them into small pages.
>
>> Ideally this would require caching __malloc_thp_mode, so we avoid the
>> unneeded madvise calls, similar to what we need in malloc's
>> do_set_hugetlb (it also assumes that once the program performs its
>> initial malloc, any system-wide change to THP won't take effect).
> Very good point. Did not think about this before.
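
On that note, a rough standalone sketch of the caching idea (the enum
and helper names here are made up for illustration; the real code would
reuse __malloc_thp_mode rather than re-parsing the sysfs file):

/* Sketch: cache the system-wide THP mode so the stack allocation path
   only issues the extra madvise when the policy is 'always'.  Names
   are illustrative; not the actual glibc implementation.  */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

enum thp_mode { THP_UNKNOWN, THP_ALWAYS, THP_MADVISE, THP_NEVER };

static enum thp_mode
get_thp_mode (void)
{
  static enum thp_mode cached = THP_UNKNOWN;
  if (cached != THP_UNKNOWN)
    /* As noted above, later system-wide THP changes are ignored.  */
    return cached;

  char buf[64] = { 0 };
  int fd = open ("/sys/kernel/mm/transparent_hugepage/enabled", O_RDONLY);
  if (fd >= 0)
    {
      (void) read (fd, buf, sizeof buf - 1);
      close (fd);
    }
  /* The active mode is bracketed, e.g. "[always] madvise never".  */
  if (strstr (buf, "[always]") != NULL)
    cached = THP_ALWAYS;
  else if (strstr (buf, "[madvise]") != NULL)
    cached = THP_MADVISE;
  else
    cached = THP_NEVER;
  return cached;
}

int
main (void)
{
  /* Only when THP is 'always' would the stack allocation path need
     the MADV_NOHUGEPAGE call at all.  */
  printf ("extra madvise needed: %s\n",
          get_thp_mode () == THP_ALWAYS ? "yes" : "no");
  return 0;
}

The sysfs file is read only once, so a process pays for the extra
syscall only if THP was 'always' when the mode was first queried.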