public inbox for libc-help@sourceware.org
* Double free check in malloc.c for 64 bit architectures
@ 2019-11-20  0:40 Claude Zou
  2019-11-20  8:49 ` Florian Weimer
  0 siblings, 1 reply; 4+ messages in thread
From: Claude Zou @ 2019-11-20  0:40 UTC (permalink / raw)
  To: libc-help

[-- Attachment #1: Type: text/plain, Size: 513 bytes --]

Hi,

I was looking at malloc.c, and noticed that SIZE_BITS potentially has
a spare bit on 64-bit architectures - specifically, the 0x8 bit.

I wrote up a quick patch that uses that bit to check for double frees
for fastbins more effectively than the current check. Would this be a
feasible addition to glibc? This is my first time doing something like
this, so I apologize if I'm doing something wrong.

Anyways, I've attached a diff with my proposed changes - please let me
know what you think!

Sincerely,
Claude

[-- Attachment #2: double_free_check.diff --]
[-- Type: application/octet-stream, Size: 3733 bytes --]

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 343d89f489..3f86b9e9ac 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1241,6 +1241,12 @@ checked_request2size (size_t req, size_t *sz) __nonnull (1)
 /* Mark a chunk as not being on the main arena.  */
 #define set_non_main_arena(p) ((p)->mchunk_size |= NON_MAIN_ARENA)
 
+/* Size field is or'ed with PREV_FAST_FREED when the previous adjacent
+   chunk is a freed fastbin chunk.  */
+#define PREV_FAST_FREED 0x8
+
+/* Extract the freed bit of the previous chunk.  */
+#define prev_fast_freed(p)  ((p)->mchunk_size & PREV_FAST_FREED)
 
 /*
    Bits to mask off when extracting size
@@ -1250,7 +1256,7 @@ checked_request2size (size_t req, size_t *sz) __nonnull (1)
    cause helpful core dumps to occur if it is tried by accident by
    people extending or adapting this malloc.
  */
-#define SIZE_BITS (PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA)
+#define SIZE_BITS (PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA | PREV_FAST_FREED)
 
 /* Get size, ignoring use bits */
 #define chunksize(p) (chunksize_nomask (p) & ~(SIZE_BITS))
@@ -1295,6 +1301,15 @@ checked_request2size (size_t req, size_t *sz) __nonnull (1)
 #define clear_inuse_bit_at_offset(p, s)					      \
   (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size &= ~(PREV_INUSE))
 
+/* check/set/clear fastbin freed bits in known places */
+#define fast_freed_bit_at_offset(p, s)					      \
+  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size & PREV_FAST_FREED)
+
+#define set_fast_freed_bit_at_offset(p, s)					      \
+  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size |= PREV_FAST_FREED)
+
+#define clear_fast_freed_bit_at_offset(p, s)					      \
+  (((mchunkptr) (((char *) (p)) + (s)))->mchunk_size &= ~(PREV_FAST_FREED))
 
 /* Set size at head, without disturbing its use bit */
 #define set_head_size(p, s)  ((p)->mchunk_size = (((p)->mchunk_size & SIZE_BITS) | (s)))
@@ -3590,9 +3605,13 @@ _int_malloc (mstate av, size_t bytes)
     if (__glibc_likely (victim != NULL))
       {
         size_t victim_idx = fastbin_index (chunksize (victim));
+        mchunkptr nextchunk = chunk_at_offset(victim, chunksize (victim));
         if (__builtin_expect (victim_idx != idx, 0))
           malloc_printerr ("malloc(): memory corruption (fast)");
+        if (__glibc_unlikely (!prev_fast_freed(nextchunk)))
+          malloc_printerr ("malloc(): corrupt prev fast freed (fast)");
         check_remalloced_chunk (av, victim, nb);
+        clear_fast_freed_bit_at_offset (nextchunk, 0);
 #if USE_TCACHE
         /* While we're here, if we see other chunks of the same size,
      stash them in the tcache.  */
@@ -3614,6 +3633,7 @@ _int_malloc (mstate av, size_t bytes)
           break;
       }
           tcache_put (tc_victim, tc_idx);
+          clear_fast_freed_bit_at_offset (tc_victim, nb);
         }
     }
 #endif
@@ -4286,6 +4306,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
     if (have_lock && old != NULL
   && __builtin_expect (fastbin_index (chunksize (old)) != idx, 0))
       malloc_printerr ("invalid fastbin entry (free)");
+    
+    nextchunk = chunk_at_offset(p, size);
+
+    /* Check that PREV_FAST_FREED is not set. */
+    if (__glibc_unlikely (prev_fast_freed(nextchunk)))
+      malloc_printerr ("double free or corruption (fast)");
+    
+    /* Set PREV_FAST_FREED. */
+    set_fast_freed_bit_at_offset(nextchunk, 0);
   }
 
   /*
@@ -4500,8 +4529,10 @@ static void malloc_consolidate(mstate av)
     if (!nextinuse) {
       size += nextsize;
       unlink_chunk (av, nextchunk);
-	  } else
+    } else {
       clear_inuse_bit_at_offset(nextchunk, 0);
+      clear_fast_freed_bit_at_offset(nextchunk, 0);
+    }
 
     first_unsorted = unsorted_bin->fd;
     unsorted_bin->fd = p;

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: Double free check in malloc.c for 64 bit architectures
  2019-11-20  0:40 Double free check in malloc.c for 64 bit architectures Claude Zou
@ 2019-11-20  8:49 ` Florian Weimer
  2019-11-20 19:04   ` Claude Zou
  0 siblings, 1 reply; 4+ messages in thread
From: Florian Weimer @ 2019-11-20  8:49 UTC (permalink / raw)
  To: Claude Zou; +Cc: libc-help

* Claude Zou:

> I was looking at malloc.c, and noticed that SIZE_BITS potentially has
> a spare bit on 64-bit architectures - specifically, the 0x8 bit.
>
> I wrote up a quick patch that uses that bit to check for double frees
> for fastbins more effectively than the current check. Would this be a
> feasible addition to glibc?

Fastbins do not use the arena lock.  This means that if you write to
chunk headers on fastbin paths, you risk losing other updates to the
chunk header which change the PREV_INUSE bit.

We would have to use a marker value instead, similar to what we do for
tcache.

Thanks,
Florian


* Re: Double free check in malloc.c for 64 bit architectures
  2019-11-20  8:49 ` Florian Weimer
@ 2019-11-20 19:04   ` Claude Zou
  2019-11-20 19:26     ` Florian Weimer
  0 siblings, 1 reply; 4+ messages in thread
From: Claude Zou @ 2019-11-20 19:04 UTC (permalink / raw)
  To: Florian Weimer; +Cc: libc-help

That makes sense - I didn't think about the arena lock.

Is there any particular reason such a check (such as the one currently
in tcache) has not been done for fastbins yet?

On Wed, Nov 20, 2019 at 3:48 AM Florian Weimer <fweimer@redhat.com> wrote:
>
> * Claude Zou:
>
> > I was looking at malloc.c, and noticed that SIZE_BITS potentially has
> > a spare bit on 64-bit architectures - specifically, the 0x8 bit.
> >
> > I wrote up a quick patch that uses that bit to check for double frees
> > for fastbins more effectively than the current check. Would this be a
> > feasible addition to glibc?
>
> Fastbins do not use the arena lock.  This means that if you write to
> chunk headers on fastbin paths, you risk losing other updates to the
> chunk header which change the PREV_INUSE bit.
>
> We would have to use a marker value instead, similar to what we do for
> tcache.
>
> Thanks,
> Florian
>


* Re: Double free check in malloc.c for 64 bit architectures
  2019-11-20 19:04   ` Claude Zou
@ 2019-11-20 19:26     ` Florian Weimer
  0 siblings, 0 replies; 4+ messages in thread
From: Florian Weimer @ 2019-11-20 19:26 UTC (permalink / raw)
  To: Claude Zou; +Cc: libc-help

* Claude Zou:

> Is there any particular reason such a check (such as the one currently
> in tcache) has not been done for fastbins yet?

I'm not aware of any particular reason.  The implementation would be
quite similar to the current tcache check; it should be safe to traverse
the list as long as the arena lock is acquired.  But the list is
potentially unbounded, so it may make sense to consolidate the fastbin,
or at least its tail.

Thanks,
Florian
