From: Florian Weimer
To: Cupertino Miranda via Libc-alpha
Cc: Cupertino Miranda, "Jose E. Marchesi", Elena Zannoni, Cupertino Miranda
Subject: Re: [RFC] Stack allocation, hugepages and RSS implications
References: <87pm9j4azf.fsf@oracle.com> <87mt4n49ak.fsf@oracle.com>
Date: Thu, 09 Mar 2023 11:54:42 +0100
In-Reply-To: <87mt4n49ak.fsf@oracle.com> (Cupertino Miranda via Libc-alpha's message of "Wed, 08 Mar 2023 14:17:23 +0000")
Message-ID: <87bkl2b3f1.fsf@oldenburg.str.redhat.com>

* Cupertino Miranda via Libc-alpha:

> Hi everyone,
>
> For performance purposes, one of our in-house applications requires
> enabling the TRANSPARENT_HUGEPAGES_ALWAYS option in the Linux kernel,
> which makes the kernel force all large enough, aligned memory
> allocations to reside in hugepages.  I believe the reason behind this
> decision is to have more control over data location.
>
> For stack allocation, it seems that hugepages make the resident set
> size (RSS) increase significantly, and without any apparent benefit,
> as the huge page will be split into small pages even before leaving
> glibc's stack allocation code.
>
> As an example, this is what happens in the case of a pthread_create
> with a 2MB stack size:
> 1. mmap request for the 2MB allocation with PROT_NONE;
>    a huge page is "registered" by the kernel.
> 2. The thread descriptor is written at the end of the stack.
>    This triggers a page fault in the kernel, which performs the
>    actual memory allocation of the 2MB.
> 3.
> An mprotect changes protection on the guard (one of the small pages
>    of the allocated space).  At this point the kernel needs to break
>    the 2MB page into many small pages in order to change the
>    protection on that memory region.
>
> This eliminates any benefit of having huge pages for stack
> allocation, but also makes RSS increase by 2MB even though nothing
> was written to most of the small pages.
>
> As an exercise I added __madvise(..., MADV_NOHUGEPAGE) right after
> the __mmap in nptl/allocatestack.c.  As expected, RSS was
> significantly reduced for the application.

Interesting.  I did not expect to get hugepages right out of mmap.  I
would have expected subsequent coalescing by khugepaged, taking actual
stack usage into account.  But over-allocating memory might be
beneficial, see below.  (Something must be happening between step 1 &
step 2 to make the writes possible.)

> In any case, I wonder if there is an actual use case where a hugepage
> would survive glibc stack allocation and bring an actual benefit.

It can reduce TLB misses.  The first-level TLB might only have 64
entries for 4K pages, for example.  If the working set on the stack
(including the TCB) needs more than a couple of pages, it might be
beneficial to use a 2M page and use just one TLB entry.

In your case, if your stacks are quite small, maybe you can just
allocate slightly less than 2 MiB?

The other question is whether the reported RSS is real, or if the
kernel will recover zero stack pages on memory pressure.

Thanks,
Florian