Date: Tue, 18 Jan 2022 12:32:34 +0100
To: libc-help@sourceware.org
From: Dominik Csapak
Cc: Wolfgang Bumiller
Subject: expectations of glibc's (de)allocator

Hi,

I am sorry in advance for the wall of text, but maybe this list can help
us, or at least shed some light on issues we have had regarding memory
(de)allocation.

The setup: we have a long-running daemon written in Rust for x86_64 Linux
(specifically Debian, currently bullseye) that uses Rust's default
malloc/free, which is AFAIK glibc's malloc/free (de)allocator. The daemon
makes heavy use of the async Rust frameworks tokio and hyper.

Our problem is the following: during the lifetime of the daemon there are
some memory-heavy operations (network traffic/disk IO/etc.), so it
allocates quite a bit of memory. At the end of such an operation, however,
this memory is still allocated to the program (we checked the RSS/resident
memory with e.g. htop/ps), and even letting it run for extended periods of
time does not really release the memory. (We had customers where it
retained over 5GiB of memory while basically doing nothing.)

There are some things we tried (e.g. tuning the options mentioned in
mallopt(3); a minimal sketch of how we call these from Rust follows below):

* Calling malloc_trim(0) at the end of the program released the memory
  (see the reproducer at the end).
* Changing M_TRIM_THRESHOLD did not change anything; the memory is still
  allocated to the program (even setting it to 0 or 1 did not make a
  difference). This surprised us quite a bit, since the documentation
  reads like this should release the memory sooner, and because
  malloc_trim also released it.
* Setting the M_MMAP_THRESHOLD option to a very low value fixed the
  behaviour (not really surprising).

(Of course changing the allocator altogether, e.g. to jemalloc or musl,
also changed the behaviour, but we'd like to avoid that if possible.)

We have a small reproducer that triggers this behaviour too (pasted at the
end of this mail). It starts a large number of async tasks and waits for
them to finish, then drops the async runtime completely (at this point the
program cannot really have any memory in use, but it still holds memory
according to htop). To debug, we added a malloc_trim call at the end,
which actually does release the memory to the OS.
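For completeness, this is roughly how we apply the mallopt tuning while
testing (a minimal sketch only; the constant values are the ones from
glibc's <malloc.h>, declared by hand here just like malloc_trim is in the
reproducer, and the actual call sites in the daemon differ a bit):

use libc::c_int;

// parameter numbers from glibc's <malloc.h>
const M_TRIM_THRESHOLD: c_int = -1;
const M_MMAP_THRESHOLD: c_int = -3;

extern "C" {
    fn mallopt(param: c_int, value: c_int) -> c_int;
    fn malloc_trim(pad: libc::size_t) -> c_int;
}

fn tune_glibc_malloc() {
    unsafe {
        // ask glibc to trim the heap as soon as free memory accumulates
        // at its top (mallopt returns 1 on success, 0 on error)
        if mallopt(M_TRIM_THRESHOLD, 0) != 1 {
            eprintln!("mallopt(M_TRIM_THRESHOLD) failed");
        }
        // force even small allocations to be served via mmap so they go
        // back to the kernel on free (only for testing, this has a cost)
        if mallopt(M_MMAP_THRESHOLD, 4096) != 1 {
            eprintln!("mallopt(M_MMAP_THRESHOLD) failed");
        }
    }
}

fn release_unused_heap() {
    // explicitly hand free heap pages back to the kernel
    unsafe { malloc_trim(0) };
}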
So from our side it looks like either

* we (or the frameworks) trigger some bad/worst case in the memory
  allocation pattern. In this case it would be interesting how we could
  check/debug that and, if possible, how to fix it in our program, or
* glibc's allocator has some bug regarding releasing memory to the OS.
  While I personally doubt that, it is curious that tuning
  M_TRIM_THRESHOLD does not seem to do anything. It would also be
  interesting how to debug/check that, of course.

I hope this list is not completely the wrong place to ask, but if it is,
just say so (and maybe point me in the right direction).

Thanks,
Dominik

---- below is the reproducer (note: it uses about 1.4GiB peak memory) ----

use std::io;
use std::time::Duration;

use tokio::task;

extern "C" {
    fn malloc_trim(pad: libc::size_t) -> i32;
}

async fn wait(_i: usize) {
    let delay_in_seconds = Duration::new(2, 0);
    tokio::time::sleep(delay_in_seconds).await;
}

fn main() {
    let rt = tokio::runtime::Runtime::new().unwrap();

    rt.block_on(async move {
        let num = 1_000_000;
        for i in 0..num {
            task::spawn(async move {
                wait(i).await;
            });
        }
        // wait long enough for all spawned tasks to finish
        wait(0).await;
        wait(0).await;
    });

    println!("all tasks should be finished");
    let mut buffer = String::new();
    io::stdin().read_line(&mut buffer).expect("error");

    drop(rt);

    println!("dropped runtime");
    let mut buffer = String::new();
    io::stdin().read_line(&mut buffer).expect("error");

    // explicitly release free heap pages back to the kernel
    unsafe { malloc_trim(0) };

    println!("called malloc_trim");
    let mut buffer = String::new();
    io::stdin().read_line(&mut buffer).expect("error");
}
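In case someone wants to build the reproducer: it should only need the
libc crate and tokio with the runtime and timer features enabled, i.e. a
dependency section along these lines (the exact versions are just an
example, I would expect any tokio 1.x to behave the same):

[dependencies]
tokio = { version = "1", features = ["rt-multi-thread", "time"] }
libc = "0.2"

Then build with cargo build --release and watch the RSS in htop while the
program waits for input at each prompt.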