public inbox for libc-alpha@sourceware.org
* Re: [PATCH] Add Safe-Linking to fastbins and tcache
@ 2020-03-19 13:32 Wilco Dijkstra
  2020-03-19 15:09 ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Wilco Dijkstra @ 2020-03-19 13:32 UTC (permalink / raw)
  To: 'GNU C Library', eyal.itkin

Hi,

I tried applying your patch - there seem to be issues with formatting:

@@ -327,6 +327,15 @@ __malloc_assert (const char *assertion, const
char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX

and

+#define PROTECT_PTR(pos, ptr, type)     ((type)((((size_t)pos) >>
PAGE_SHIFT) ^ ((size_t)ptr)))

It seems your mail client wrapped those lines because they were too long...

+          if (__glibc_unlikely(!aligned_OK(p)))

There should be a space before each '('. See the GNU coding style, which glibc follows:
https://www.gnu.org/prep/standards/standards.html#Formatting
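
For example, the check above would then read:

  if (__glibc_unlikely (!aligned_OK (p)))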

Did you run the glibc malloc benchmarks before/after this change?

Cheers,
Wilco

* [PATCH] Add Safe-Linking to fastbins and tcache
@ 2020-02-02  9:43 Eyal Itkin
  2020-02-03 18:10 ` Carlos O'Donell
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-02-02  9:43 UTC (permalink / raw)
  To: libc-alpha

Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbins and tcache) from being tampered with by
attackers. The mechanism makes use of randomness from ASLR
(mmap_base), and when combined with chunk alignment integrity checks,
it protects the pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small
bins), there was no similar protection for single-linked lists. This
solution protects against three common attacks:
  * Partial pointer override: modifies the lower bytes (little endian)
  * Full pointer override: hijacks the pointer to an attacker-controlled location
  * Unaligned chunks: points the list to an unaligned address

The design assumes an attacker doesn't know where the heap is
located, and uses the ASLR randomness to "sign" the single-linked
pointers. Denote the pointer as P and the location in which it is
stored as L; the calculation is (a standalone sketch follows):
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)
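
Since XOR is its own inverse, applying the same mask again reveals
the original pointer. A minimal standalone C sketch of that
round-trip; PAGE_SHIFT is hard-coded to 12 here purely as an
assumption of this example:

  #include <assert.h>
  #include <stddef.h>

  #define PAGE_SHIFT 12  /* assumption for this sketch only */
  #define PROTECT(pos, ptr) \
    ((void *) ((((size_t) (pos)) >> PAGE_SHIFT) ^ ((size_t) (ptr))))

  int
  main (void)
  {
    void *P = &P;                     /* some pointer to store */
    void *L[1];                       /* the location it is stored in */
    L[0] = PROTECT (L, P);            /* *L = PROTECT(P) */
    assert (PROTECT (L, L[0]) == P);  /* the same XOR reveals P */
    return 0;
  }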

This way, the random bits of the address L (which start at bit
position PAGE_SHIFT) are merged into the least-significant bits of
the stored protected pointer. This protection layer prevents an
attacker from modifying the pointer into a controlled value.

An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must correctly guess the alignment bits

On standard 32-bit Linux machines an attack will fail outright 7 out
of 8 times, and on 64-bit machines it will fail 15 out of 16 times;
the sketch below shows where these odds come from.
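
The odds follow from the alignment filter, sketched here after
malloc.c's aligned_OK macro; MALLOC_ALIGNMENT == 16 is an assumption
of this example, typical on 64-bit targets:

  /* Alignment filter, modeled on malloc.c's aligned_OK.  After the
     XOR reveal, a blindly corrupted link has its 4 low bits
     effectively random, so it passes the check with probability
     1/16 (the attack fails 15 out of 16 times).  On 32-bit targets
     the alignment is 8 bytes (3 bits): 7 out of 8.  */
  #define MALLOC_ALIGNMENT 16
  #define MALLOC_ALIGN_MASK (MALLOC_ALIGNMENT - 1)
  #define aligned_OK(m) (((size_t) (m) & MALLOC_ALIGN_MASK) == 0)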

This proposed patch was benchmarked, and its effect on the overall
performance of the heap was negligible: it could not be distinguished
from the normal variance between runs of the vanilla version. A
similar protection was added to Chromium's version of TCMalloc in
2013, and according to their documentation it had an overhead of
less than 2%.

For more information, please read our White Paper, which can be
found here:
https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt
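
As a usage illustration (not part of the patch itself), here is a
self-contained sketch of a protected singly-linked push/pop, modeled
on tcache_put/tcache_get; the entry type and the PAGE_SHIFT value are
stand-ins for this example:

  #include <assert.h>
  #include <stddef.h>

  #define PAGE_SHIFT 12   /* stand-in value for this sketch */
  #define PROTECT_PTR(pos, ptr, type) \
    ((type) ((((size_t) pos) >> PAGE_SHIFT) ^ ((size_t) ptr)))
  #define REVEAL_PTR(pos, ptr, type) PROTECT_PTR (pos, ptr, type)

  typedef struct entry { struct entry *next; } entry;

  static entry *head;   /* models tcache->entries[tc_idx] */

  static void
  push (entry *e)       /* models tcache_put */
  {
    e->next = PROTECT_PTR (&e->next, head, entry *);
    head = e;
  }

  static entry *
  pop (void)            /* models tcache_get */
  {
    entry *e = head;
    head = REVEAL_PTR (&e->next, e->next, entry *);
    return e;
  }

  int
  main (void)
  {
    entry a, b;
    push (&a);
    push (&b);
    assert (pop () == &b);
    assert (pop () == &a);
    return 0;
  }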

---
 malloc/malloc.c | 61 ++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 48 insertions(+), 13 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 7d7d30bb13..25a745df9a 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -327,6 +327,15 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX
 #endif

+/*
+  Safe-Linking:
+  Use randomness from ASLR (mmap_base) to protect single-linked lists
+  of Fast-Bins and TCache. Together with allocation alignment checks, this
+  mechanism reduces the risk of pointer hijacking, as was done with
+  Safe-Unlinking in the double-linked lists of Small-Bins.
+*/
+#define PROTECT_PTR(pos, ptr, type)     ((type)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
+#define REVEAL_PTR(pos, ptr, type)      PROTECT_PTR(pos, ptr, type)

 /*
   REALLOC_ZERO_BYTES_FREES should be set if a call to
@@ -2157,12 +2166,14 @@ do_check_malloc_state (mstate av)

       while (p != 0)
         {
+          if (__glibc_unlikely(!aligned_OK(p)))
+            malloc_printerr ("do_check_malloc_state(): un-aligned fastbin chunk detected");
           /* each chunk claims to be inuse */
           do_check_inuse_chunk (av, p);
           total += chunksize (p);
           /* chunk belongs in this bin */
           assert (fastbin_index (chunksize (p)) == i);
-          p = p->fd;
+          p = REVEAL_PTR(&p->fd, p->fd, mchunkptr);
         }
     }

@@ -2923,7 +2934,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
      detect a double free.  */
   e->key = tcache;

-  e->next = tcache->entries[tc_idx];
+  e->next = PROTECT_PTR(&e->next, tcache->entries[tc_idx], tcache_entry *);
   tcache->entries[tc_idx] = e;
   ++(tcache->counts[tc_idx]);
 }
@@ -2934,9 +2945,11 @@ static __always_inline void *
 tcache_get (size_t tc_idx)
 {
   tcache_entry *e = tcache->entries[tc_idx];
+  if (__glibc_unlikely(!aligned_OK(e)))
+    malloc_printerr ("malloc(): un-aligned tcache chunk detected");
-  tcache->entries[tc_idx] = e->next;
+  tcache->entries[tc_idx] = REVEAL_PTR(&e->next, e->next, tcache_entry *);
   --(tcache->counts[tc_idx]);
   e->key = NULL;
   return (void *) e;
 }

@@ -2960,7 +2973,9 @@ tcache_thread_shutdown (void)
       while (tcache_tmp->entries[i])
  {
    tcache_entry *e = tcache_tmp->entries[i];
-   tcache_tmp->entries[i] = e->next;
+      if (__glibc_unlikely(!aligned_OK(e)))
+        malloc_printerr ("tcache_thread_shutdown(): un-aligned tcache chunk detected");
+   tcache_tmp->entries[i] = REVEAL_PTR(&e->next, e->next, tcache_entry *);
    __libc_free (e);
  }
     }
@@ -3570,8 +3585,11 @@ _int_malloc (mstate av, size_t bytes)
       victim = pp; \
       if (victim == NULL) \
  break; \
+      pp = REVEAL_PTR(&victim->fd, victim->fd, mfastbinptr);             \
+      if (__glibc_unlikely(!aligned_OK(pp)))                             \
+        malloc_printerr ("malloc(): un-aligned fastbin chunk detected"); \
     } \
-  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
+  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
  != victim); \

   if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
@@ -3583,8 +3601,11 @@ _int_malloc (mstate av, size_t bytes)

       if (victim != NULL)
  {
+      if (__glibc_unlikely(!aligned_OK(victim)))
+        malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
+
    if (SINGLE_THREAD_P)
-     *fb = victim->fd;
+     *fb = REVEAL_PTR(&victim->fd, victim->fd, mfastbinptr);
    else
      REMOVE_FB (fb, pp, victim);
    if (__glibc_likely (victim != NULL))
@@ -3605,8 +3626,10 @@ _int_malloc (mstate av, size_t bytes)
    while (tcache->counts[tc_idx] < mp_.tcache_count
  && (tc_victim = *fb) != NULL)
      {
+       if (__glibc_unlikely(!aligned_OK(tc_victim)))
+         malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
        if (SINGLE_THREAD_P)
- *fb = tc_victim->fd;
+ *fb = REVEAL_PTR(&tc_victim->fd, tc_victim->fd, mfastbinptr);
        else
  {
    REMOVE_FB (fb, pp, tc_victim);
@@ -4196,11 +4219,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
      LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
      for (tmp = tcache->entries[tc_idx];
  tmp;
- tmp = tmp->next)
+ tmp = REVEAL_PTR(&tmp->next, tmp->next, tcache_entry *))
+        {
+       if (__glibc_unlikely(!aligned_OK(tmp)))
+ malloc_printerr ("free(): un-aligned chunk detected in tcache 2");
        if (tmp == e)
  malloc_printerr ("free(): double free detected in tcache 2");
      /* If we get here, it was a coincidence.  We've wasted a
         few cycles, but don't abort.  */
+        }
    }

  if (tcache->counts[tc_idx] < mp_.tcache_count)
@@ -4264,7 +4291,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
     add (i.e., double free).  */
  if (__builtin_expect (old == p, 0))
    malloc_printerr ("double free or corruption (fasttop)");
- p->fd = old;
+ p->fd = PROTECT_PTR(&p->fd, old, mchunkptr);
  *fb = p;
       }
     else
@@ -4274,7 +4301,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
       add (i.e., double free).  */
    if (__builtin_expect (old == p, 0))
      malloc_printerr ("double free or corruption (fasttop)");
-   p->fd = old2 = old;
+   old2 = old;
+   p->fd = PROTECT_PTR(&p->fd, old, mchunkptr);
  }
       while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
       != old2);
@@ -4472,13 +4500,16 @@ static void malloc_consolidate(mstate av)
     if (p != 0) {
       do {
  {
+   if (__glibc_unlikely(!aligned_OK(p)))
+     malloc_printerr ("malloc_consolidate(): un-aligned fastbin chunk detected");
+
    unsigned int idx = fastbin_index (chunksize (p));
    if ((&fastbin (av, idx)) != fb)
      malloc_printerr ("malloc_consolidate(): invalid chunk size");
  }

  check_inuse_chunk(av, p);
- nextp = p->fd;
+ nextp = REVEAL_PTR(&p->fd, p->fd, mchunkptr);

  /* Slightly streamlined version of consolidation code in free() */
  size = chunksize (p);
@@ -4896,8 +4927,10 @@ int_mallinfo (mstate av, struct mallinfo *m)

   for (i = 0; i < NFASTBINS; ++i)
     {
-      for (p = fastbin (av, i); p != 0; p = p->fd)
+      for (p = fastbin (av, i); p != 0; p = REVEAL_PTR(&p->fd, p->fd, mchunkptr))
         {
+          if (__glibc_unlikely(!aligned_OK(p)))
+            malloc_printerr ("int_mallinfo(): un-aligned fastbin chunk detected");
           ++nfastblocks;
           fastavail += chunksize (p);
         }
@@ -5437,8 +5470,10 @@ __malloc_info (int options, FILE *fp)

        while (p != NULL)
  {
+          if (__glibc_unlikely(!aligned_OK(p)))
+            malloc_printerr ("__malloc_info(): un-aligned fastbin chunk detected");
    ++nthissize;
-   p = p->fd;
+   p = REVEAL_PTR(&p->fd, p->fd, mchunkptr);
  }

        fastavail += nthissize * thissize;
-- 
2.17.1


Thread overview: 33+ messages
2020-03-19 13:32 [PATCH] Add Safe-Linking to fastbins and tcache Wilco Dijkstra
2020-03-19 15:09 ` Eyal Itkin
2020-03-19 18:15   ` Eyal Itkin
2020-03-20  0:10     ` Wilco Dijkstra
2020-03-20  2:49       ` Carlos O'Donell
2020-03-20  2:53     ` DJ Delorie
2020-03-20  9:35       ` Eyal Itkin
2020-03-20 18:00         ` Adhemerval Zanella
2020-03-20 19:45           ` Eyal Itkin
2020-03-20 23:58             ` DJ Delorie
2020-03-21  2:47             ` Carlos O'Donell
2020-03-23 13:52               ` Adhemerval Zanella
2020-03-23 16:11                 ` DJ Delorie
2020-03-23 16:23                   ` Carlos O'Donell
2020-03-23 16:26                   ` Adhemerval Zanella
2020-03-24 13:44                   ` Wilco Dijkstra
2020-03-29 17:04               ` Carlos O'Donell
2020-03-30  6:30                 ` Eyal Itkin
2020-03-30  7:56                 ` Andreas Schwab
2020-03-30  8:01             ` Andreas Schwab
2020-03-30  8:44               ` Eyal Itkin
2020-03-30 13:56                 ` Carlos O'Donell
2020-03-30 17:19                   ` Eyal Itkin
2020-10-31 13:33                   ` H.J. Lu
2020-10-31 14:49                     ` Eyal Itkin
2020-10-31 15:27                       ` H.J. Lu
  -- strict thread matches above, loose matches on Subject: below --
2020-02-02  9:43 Eyal Itkin
2020-02-03 18:10 ` Carlos O'Donell
2020-02-03 19:48   ` Eyal Itkin
2020-03-05  1:19     ` Eyal Itkin
2020-03-05  3:30       ` Carlos O'Donell
2020-03-18 21:28         ` Carlos O'Donell
2020-03-19  6:41           ` Eyal Itkin
