public inbox for libc-alpha@sourceware.org
* [PATCH] realloc: Limit chunk reuse to only growing requests [BZ #30579]
@ 2023-07-04 18:24 Siddhesh Poyarekar
  2023-07-05  7:08 ` Nicolas Dusart
  2023-07-05 11:55 ` Aurelien Jarno
  0 siblings, 2 replies; 6+ messages in thread
From: Siddhesh Poyarekar @ 2023-07-04 18:24 UTC (permalink / raw)
  To: libc-alpha; +Cc: Nicolas Dusart, Aurelien Jarno

The trim_threshold is too aggressive a heuristic to decide whether chunk
reuse is OK for reallocated memory: for repeated small, shrinking
allocations it leads to internal fragmentation, and for repeated larger
allocations that fragmentation may blow up even worse due to the dynamic
nature of the threshold.

Limit reuse to cases where the slack is within the alignment padding,
which is 2 * size_t for heap allocations and a page size for mmapped
allocations.  There's the added wrinkle of transparent huge pages (THP),
but this fix ignores it for now, pessimizing that case in favor of
keeping fragmentation low.

This resolves BZ #30579.

Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reported-by: Nicolas Dusart <nicolas@freedelity.be>
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
---

The test case in the bz seems fixed with this, bringing VSZ and RSS back
to ~40M from ~1G.  Aurelien, can you please test with plasma desktop?

Thanks,
Sid


 malloc/malloc.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index b8c0f4f580..e2f1a615a4 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -3417,16 +3417,23 @@ __libc_realloc (void *oldmem, size_t bytes)
   if (__glibc_unlikely (mtag_enabled))
     *(volatile char*) oldmem;
 
-  /* Return the chunk as is whenever possible, i.e. there's enough usable space
-     but not so much that we end up fragmenting the block.  We use the trim
-     threshold as the heuristic to decide the latter.  */
-  size_t usable = musable (oldmem);
-  if (bytes <= usable
-      && (unsigned long) (usable - bytes) <= mp_.trim_threshold)
-    return oldmem;
-
   /* chunk corresponding to oldmem */
   const mchunkptr oldp = mem2chunk (oldmem);
+
+  /* Return the chunk as is if the request grows within usable bytes, typically
+     into the alignment padding.  We want to avoid reusing the block for
+     shrinkages because it ends up unnecessarily fragmenting the address space.
+     This is also why the heuristic misses alignment padding for THP for
+     now.  */
+  size_t usable = musable (oldmem);
+  if (bytes <= usable)
+    {
+      size_t difference = usable - bytes;
+      if ((unsigned long) difference < 2 * sizeof (INTERNAL_SIZE_T)
+	  || (chunk_is_mmapped (oldp) && difference <= GLRO (dl_pagesize)))
+	return oldmem;
+    }
+
   /* its size */
   const INTERNAL_SIZE_T oldsize = chunksize (oldp);
 
-- 
2.41.0



Thread overview: 6+ messages
2023-07-04 18:24 [PATCH] realloc: Limit chunk reuse to only growing requests [BZ #30579] Siddhesh Poyarekar
2023-07-05  7:08 ` Nicolas Dusart
2023-07-05 10:46   ` Siddhesh Poyarekar
2023-07-05 11:55 ` Aurelien Jarno
2023-07-05 14:37   ` Siddhesh Poyarekar
2023-07-05 18:30     ` Aurelien Jarno
