public inbox for newlib@sourceware.org
* [PATCH] Nano-malloc: Fix for unwanted external heap fragmentation
@ 2021-04-26  8:57 Ola Olsson
  2021-04-26  9:41 ` Corinna Vinschen
  2021-04-26 15:31 ` Keith Packard
  0 siblings, 2 replies; 12+ messages in thread
From: Ola Olsson @ 2021-04-26  8:57 UTC (permalink / raw)
  To: newlib; +Cc: ola1olsson, Ola Olsson

The only reason it is tough for us to use nano-malloc
is a small shortcoming: when nano_malloc() splits a
bigger chunk from the free list into two pieces, it
hands back the second one (the tail) to the user.
This is error prone and especially bad for smaller heaps,
where nano-malloc is supposed to be superior. The normal
malloc doesn't have this issue, so we have to use it even
though it costs us ~2 kB compared to nano-malloc.

The problem arises especially after handing back _every_
malloced allocation to the heap and then exercising the
heap again by allocating something small. Depending on how
the heap has been exercised before, this small item might
split the whole heap into two equally big parts.

I have uploaded the smallest possible application
(only tested on ST and Nordic devices) that demonstrates
the issue; the real customer applications are far more
complicated:
https://drive.google.com/file/d/1kfSC2KOm3Os3mI7EBd-U0j63qVs8xMbt/view?usp=sharing

The application works like the following pseudo code,
where we assume a heap of 100 bytes (I haven't taken
padding and other nitty-gritty details into account,
to keep it easy to understand):

void *ptr = malloc(52);  // We get 52 bytes, leaving 48
                         // bytes of the heap to use.
free(ptr);               // Hand the 52 bytes back to
                         // nano_malloc. This is the magic
                         // line that exposes the issue.
ptr = malloc(1);         // Nano-malloc splits the 52-byte
                         // chunk in the free list and hands
                         // you a pointer somewhere in the
                         // middle of the heap.
void *ptr2 = malloc(52); // Out of memory...
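
For completeness, a compilable variant of the scenario could
look like the following (this is not the uploaded application;
the sizes are illustrative and the real numbers depend on chunk
headers, padding and alignment on the target):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *ptr = malloc(52);   /* takes a bit more than half the heap */
    free(ptr);                /* the 52-byte chunk enters the free list */

    ptr = malloc(1);          /* tail split: the pointer lands mid-heap */
    void *ptr2 = malloc(52);  /* no contiguous 52 bytes remain */

    /* With the assumed 100-byte heap, ptr2 is NULL on nano-malloc. */
    printf("small=%p big=%p\n", ptr, ptr2);
    return 0;
}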

I have done a fix which hands back the first part of the
split chunk instead. Once this is fixed, the 1 byte
obviously ends up at position 0 of the heap instead
of somewhere in the middle.
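
In simplified terms (leaving out the free-list bookkeeping
that the actual patch below has to handle), the split changes
from returning the tail to returning the head:

/* Before: split the free chunk r and return the tail. */
r->size = rem;                    /* head stays on the free list */
r = (chunk *)((char *)r + rem);   /* tail goes to the user */
r->size = alloc_size;

/* After: return the head and keep the tail on the free list. */
r->size = alloc_size;             /* head goes to the user */
chunk *tail = (chunk *)((char *)r + alloc_size);
tail->size = rem;                 /* tail stays on the free list */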

However, this still won't let us malloc 52 new bytes, even
though we potentially have 99 bytes left to use in the heap.
The reason is that when we try to do the allocation,
nano-malloc looks into the free list and sees a 51-byte
chunk to be used.
This is not big enough, so nano-malloc decides to call
sbrk for _another_ 52 bytes, which is not possible since
there are only 48 bytes left to ask for.

The solution to this problem is to check whether the last
item in the free list is adjacent to sbrk(0). If it is,
as it is in this case, we can ask sbrk for just the
remainder of what is needed; in this case, 1 byte.
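
Roughly, the idea is the following (the real code in the
patch below also recomputes alignment, padding and the
minimum chunk size, and handles the reentrant RARG/RCALL
plumbing):

/* Walk to the last chunk on the free list. */
chunk *last = free_list;
while (last->next)
    last = last->next;

/* If it borders the current heap end, grow the heap by only
 * the missing amount and merge it into the last chunk. */
if ((char *)last + last->size == (char *)sbrk(0))
{
    malloc_size_t missing = alloc_size - last->size;
    if (sbrk_aligned(missing) != (void *)-1)
        last->size += missing;  /* the merged chunk now fits */
}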

NB! I have only tested the solution on our ST device.
---
 newlib/libc/stdlib/nano-mallocr.c | 70 +++++++++++++++++++++++++++----
 1 file changed, 61 insertions(+), 9 deletions(-)

diff --git a/newlib/libc/stdlib/nano-mallocr.c b/newlib/libc/stdlib/nano-mallocr.c
index 1e0703948..1c7ab390b 100644
--- a/newlib/libc/stdlib/nano-mallocr.c
+++ b/newlib/libc/stdlib/nano-mallocr.c
@@ -264,11 +264,22 @@ void * nano_malloc(RARG malloc_size_t s)
         {
             if (rem >= MALLOC_MINCHUNK)
             {
-                /* Find a chunk that much larger than required size, break
-                * it into two chunks and return the second one */
-                r->size = rem;
-                r = (chunk *)((char *)r + rem);
-                r->size = alloc_size;
+                if (p == r)
+                {
+                    /* First item in the list, break it into two chunks
+                    *  and return the first one */
+                    r->size = alloc_size;
+                    free_list = (chunk *)((char *)r + alloc_size);
+                    free_list->size = rem;
+                    free_list->next = r->next;
+                } else {
+                    /* Any other item in the list. Split and return
+                    * the first one */
+                    r->size = alloc_size;
+                    p->next = (chunk *)((char *)r + alloc_size);
+                    p->next->size = rem;
+                    p->next->next = r->next;
+                }
             }
             /* Find a chunk that is exactly the size or slightly bigger
              * than requested size, just return this chunk */
@@ -297,11 +308,52 @@ void * nano_malloc(RARG malloc_size_t s)
         /* sbrk returns -1 if fail to allocate */
         if (r == (void *)-1)
         {
-            RERRNO = ENOMEM;
-            MALLOC_UNLOCK;
-            return NULL;
+            /* sbrk didn't have the requested amount. Let's check
+             * if the last item in the free list is adjacent to the 
+             * current heap end (sbrk(0)). In that case, only ask
+             * for the difference in size and merge them */
+            p = free_list;
+            r = p;
+
+            while (r)
+            {
+                p = r;
+                r = r->next;
+            }
+
+            if ((char *)p + p->size == (char *)_SBRK_R(RCALL 0))
+            {
+               /* The last free item has the heap end as neighbour.
+                * Let's ask for a smaller amount and merge */
+               alloc_size -= p->size;
+               alloc_size = ALIGN_SIZE(alloc_size, CHUNK_ALIGN); /* size of aligned data load */
+               alloc_size += MALLOC_PADDING; /* padding */
+               alloc_size += CHUNK_OFFSET; /* size of chunk head */
+               alloc_size = MAX(alloc_size, MALLOC_MINCHUNK);
+
+               if (sbrk_aligned(RCALL alloc_size) != (void *)-1)
+               {
+                   p->size += alloc_size;
+                   r = p;
+               }
+               else
+               {
+                   RERRNO = ENOMEM;
+                   MALLOC_UNLOCK;
+                   return NULL;
+               }
+            }
+            else
+            {
+                RERRNO = ENOMEM;
+                MALLOC_UNLOCK;
+                return NULL;
+            }
+        }
+        else
+        {
+            r->size = alloc_size;
         }
-        r->size = alloc_size;
     }
     MALLOC_UNLOCK;
 
-- 
2.25.1



Thread overview: 12+ messages
2021-04-26  8:57 [PATCH] Nano-malloc: Fix for unwanted external heap fragmentation Ola Olsson
2021-04-26  9:41 ` Corinna Vinschen
2021-04-26 15:31 ` Keith Packard
2021-04-28 16:54   ` Ola Olsson
2021-04-28 18:19     ` Keith Packard
2021-04-28 20:02       ` Ola Olsson
2021-04-28 21:12         ` Keith Packard
2021-04-29  5:11           ` Ola Olsson
2021-04-29 22:09             ` Keith Packard
2021-04-30  2:11             ` Keith Packard
2021-04-30 18:15               ` Ola Olsson
2021-05-03 11:02                 ` Corinna Vinschen
