From: "fweimer at redhat dot com"
To: glibc-bugs@sourceware.org
Subject: [Bug malloc/30723] Repeated posix_memalign calls produce long free lists, high fragmentation
Date: Fri, 11 Aug 2023 09:24:13 +0000

https://sourceware.org/bugzilla/show_bug.cgi?id=30723

--- Comment #4 from Florian Weimer ---
First part committed:

commit 542b1105852568c3ebc712225ae78b8c8ba31a78
Author: Florian Weimer
Date:   Fri Aug 11 11:18:17 2023 +0200

    malloc: Enable merging of remainders in memalign (bug 30723)

    Previously, calling _int_free from _int_memalign could put remainders
    into the tcache or into fastbins, where they are invisible to the
    low-level allocator.  This results in missed merge opportunities
    because once these freed chunks become available to the low-level
    allocator, further memalign allocations (even of the same size) are
    likely obstructing merges.

    Furthermore, during forwards merging in _int_memalign, do not
    completely give up when the remainder is too small to serve as a
    chunk on its own.  We can still give it back if it can be merged
    with the following unused chunk.  This makes it more likely that
    memalign calls in a loop achieve a compact memory layout,
    independently of initial heap layout.

    Drop some useless (unsigned long) casts along the way, and tweak the
    style to more closely match GNU on changed lines.

    Reviewed-by: DJ Delorie

Second part is still under review; I need to send a v2.

--
You are receiving this mail because:
You are on the CC list for the bug.
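[Editor's note: for context, a minimal sketch of the kind of allocation pattern the bug title and the commit message describe: posix_memalign called repeatedly in a loop, interleaved with frees, so that split-off remainders previously landed in the tcache or fastbins where they could not be merged. The alignment (64), request size (200), and loop count are illustrative assumptions, not values taken from the bug report; malloc_info is used only to make the resulting free lists visible.]

/* Sketch of the fragmentation pattern from bug 30723: repeated
   posix_memalign calls producing many split remainders.  Parameters
   are illustrative, not from the bug report.  */

#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void
memalign_or_die (void **slot)
{
  /* 64-byte alignment with an odd request size tends to make
     _int_memalign split chunks and leave small remainders behind.  */
  int rc = posix_memalign (slot, 64, 200);
  if (rc != 0)
    {
      fprintf (stderr, "posix_memalign: %s\n", strerror (rc));
      exit (EXIT_FAILURE);
    }
}

int
main (void)
{
  enum { count = 100000 };
  static void *blocks[count];

  /* Allocate many aligned blocks.  */
  for (int i = 0; i < count; ++i)
    memalign_or_die (&blocks[i]);

  /* Free every other block, leaving holes that the allocator would
     ideally merge with neighbouring remainder chunks.  */
  for (int i = 0; i < count; i += 2)
    {
      free (blocks[i]);
      blocks[i] = NULL;
    }

  /* Allocate again with the same alignment and size.  With the fix,
     these requests are more likely to reuse the merged free space
     instead of growing the heap further.  */
  for (int i = 0; i < count; i += 2)
    memalign_or_die (&blocks[i]);

  /* malloc_info dumps the arena free lists, so the degree of
     fragmentation can be compared across glibc versions.  */
  malloc_info (0, stdout);

  for (int i = 0; i < count; ++i)
    free (blocks[i]);
  return EXIT_SUCCESS;
}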