public inbox for libc-alpha@sourceware.org
* Re: [PATCH] Add Safe-Linking to fastbins and tcache
@ 2020-03-19 13:32 Wilco Dijkstra
  2020-03-19 15:09 ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Wilco Dijkstra @ 2020-03-19 13:32 UTC (permalink / raw)
  To: 'GNU C Library', eyal.itkin

Hi,

I tried applying your patch - there seem to be issues with formatting:

@@ -327,6 +327,15 @@ __malloc_assert (const char *assertion, const
char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX

and

+#define PROTECT_PTR(pos, ptr, type)     ((type)((((size_t)pos) >>
PAGE_SHIFT) ^ ((size_t)ptr)))

It seems something cut off those lines as being too long...

+          if (__glibc_unlikely(!aligned_OK(p)))

There should be spaces before each '('. See the GLIBC coding style:
https://www.gnu.org/prep/standards/standards.html#Formatting

Did you run the GLIBC malloc benchmarks before/after this change?

Cheers,
Wilco


* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-19 13:32 [PATCH] Add Safe-Linking to fastbins and tcache Wilco Dijkstra
@ 2020-03-19 15:09 ` Eyal Itkin
  2020-03-19 18:15   ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-03-19 15:09 UTC (permalink / raw)
  To: Wilco Dijkstra; +Cc: GNU C Library

Hi,

Thanks for the feedback, I'll work on it now and send an updated version.
I passed all of GLIBC's tests and also benchmarked the results in a
test I made. I wasn't aware that there is already a benchmarking
suite for malloc in GLIBC, and I will now try to use it.

At the assembly level, it adds 3-4 instructions per malloc() /
free(), and we have had great benchmarking results for this feature in
all of the tests we have run so far, so I am not worried about it. I
will use your benchmarking suite as well and send you back the
results.

Thanks again,
Eyal.

On Thu, Mar 19, 2020 at 3:33 PM Wilco Dijkstra <Wilco.Dijkstra@arm.com> wrote:
>
> Hi,
>
> I tried applying your patch - there seem to be issues with formatting:
>
> @@ -327,6 +327,15 @@ __malloc_assert (const char *assertion, const
> char *file, unsigned int line,
>  # define MAX_TCACHE_COUNT UINT16_MAX
>
> and
>
> +#define PROTECT_PTR(pos, ptr, type)     ((type)((((size_t)pos) >>
> PAGE_SHIFT) ^ ((size_t)ptr)))
>
> It seems something cut off those lines as being too long...
>
> +          if (__glibc_unlikely(!aligned_OK(p)))
>
> There should be spaces before each '('. See the GLIBC coding style:
> https://www.gnu.org/prep/standards/standards.html#Formatting
>
> Did you run the GLIBC malloc benchmarks before/after this change?
>
> Cheers,
> Wilco


* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-19 15:09 ` Eyal Itkin
@ 2020-03-19 18:15   ` Eyal Itkin
  2020-03-20  0:10     ` Wilco Dijkstra
  2020-03-20  2:53     ` DJ Delorie
  0 siblings, 2 replies; 33+ messages in thread
From: Eyal Itkin @ 2020-03-19 18:15 UTC (permalink / raw)
  To: Wilco Dijkstra; +Cc: GNU C Library

[-- Attachment #1: Type: text/plain, Size: 3655 bytes --]

Hi,

I went over the patch and fixed it to conform to the coding standards.
I've also added it as an attachment, hoping that this way the lines
won't break during the transmission of the email.

Regarding the benchmarks, I've just run them on a simple GCP instance
(1 vCPU) before and after the patch. As you can see below, the change
isn't measurable in the "malloc-simple" tests. Sometimes the vanilla
version is faster (as it should be), but sometimes the patched version
is faster, which usually means that the change is below the noise
level on the testing server.

malloc-simple-128 (before patch):
{
 "timing_type": "hp_timing",
 "functions": {
  "malloc": {
   "": {
    "malloc_block_size": 128,
    "max_rss": 1676,
    "main_arena_st_allocs_0025_time": 77.511,
    "main_arena_st_allocs_0100_time": 94.4398,
    "main_arena_st_allocs_0400_time": 93.528,
    "main_arena_st_allocs_1600_time": 161.315,
    "main_arena_mt_allocs_0025_time": 117.634,
    "main_arena_mt_allocs_0100_time": 144.564,
    "main_arena_mt_allocs_0400_time": 159.591,
    "main_arena_mt_allocs_1600_time": 230.882,
    "thread_arena__allocs_0025_time": 120.277,
    "thread_arena__allocs_0100_time": 146.871,
    "thread_arena__allocs_0400_time": 157.205,
    "thread_arena__allocs_1600_time": 223.46
   }
  }
 }
}

malloc-simple-128 (with the patch):
{
 "timing_type": "hp_timing",
 "functions": {
  "malloc": {
   "": {
    "malloc_block_size": 128,
    "max_rss": 1672,
    "main_arena_st_allocs_0025_time": 81.6073,
    "main_arena_st_allocs_0100_time": 94.0351,
    "main_arena_st_allocs_0400_time": 97.1118,
    "main_arena_st_allocs_1600_time": 159.065,
    "main_arena_mt_allocs_0025_time": 123.422,
    "main_arena_mt_allocs_0100_time": 146.074,
    "main_arena_mt_allocs_0400_time": 151.536,
    "main_arena_mt_allocs_1600_time": 215.798,
    "thread_arena__allocs_0025_time": 116.577,
    "thread_arena__allocs_0100_time": 142.024,
    "thread_arena__allocs_0400_time": 158.38,
    "thread_arena__allocs_1600_time": 233.216
   }
  }
 }
}

I will be more than happy to help if there are more questions or
additional fixes are needed.
Thanks for your time,
Eyal.

On Thu, Mar 19, 2020 at 5:09 PM Eyal Itkin <eyal.itkin@gmail.com> wrote:
>
> Hi,
>
> Thanks for the feedback, I'll will work on it now and send an updated version.
> I passed all of GLIBC's tests and also benchmakred the results in a
> test I made. I wasn't aware of the fact that there is already a
> benchmarking suite for malloc in GLIBC, and I will now try to use it.
>
> In the assembly level, it adds 3/4 assembly instructions per malloc()
> / free(), and we had great benchmarking results for this feature in
> all of the tests we made thus far so I am not worried about it, but I
> will use your benchmarking suite as well and send you back the
> results.
>
> Thanks again,
> Eyal.
>
> On Thu, Mar 19, 2020 at 3:33 PM Wilco Dijkstra <Wilco.Dijkstra@arm.com> wrote:
> >
> > Hi,
> >
> > I tried applying your patch - there seem to be issues with formatting:
> >
> > @@ -327,6 +327,15 @@ __malloc_assert (const char *assertion, const
> > char *file, unsigned int line,
> >  # define MAX_TCACHE_COUNT UINT16_MAX
> >
> > and
> >
> > +#define PROTECT_PTR(pos, ptr, type)     ((type)((((size_t)pos) >>
> > PAGE_SHIFT) ^ ((size_t)ptr)))
> >
> > It seems something cut off those lines as being too long...
> >
> > +          if (__glibc_unlikely(!aligned_OK(p)))
> >
> > There should be spaces before each '('. See the GLIBC coding style:
> > https://www.gnu.org/prep/standards/standards.html#Formatting
> >
> > Did you run the GLIBC malloc benchmarks before/after this change?
> >
> > Cheers,
> > Wilco

[-- Attachment #2: 0001-Add-Safe-Linking-to-fastbins-and-tcache.patch --]
[-- Type: application/octet-stream, Size: 9117 bytes --]

From efd94f8d72e903fa1638ff2a28aa23f14c4b0f0d Mon Sep 17 00:00:00 2001
From: Eyal Itkin <eyalit@checkpoint.com>
Date: Thu, 19 Mar 2020 18:06:07 +0200
Subject: [PATCH] Add Safe-Linking to fastbins and tcache

Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbin and tcache) from being tampered with by
attackers. The mechanism makes use of randomness from ASLR (mmap_base),
and when combined with chunk alignment integrity checks, it protects
the pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position) will be merged with the LSBs of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.

An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must correctly guess the alignment bits

On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.

This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the normal variance between test runs on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.
---
 malloc/malloc.c | 64 +++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 51 insertions(+), 13 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 7d7d30bb13..5dbb8e7487 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -327,6 +327,16 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX
 #endif
 
+/*
+  Safe-Linking:
+  Use randomness from ASLR (mmap_base) to protect single-linked lists
+  of Fast-Bins and TCache. Together with allocation alignment checks, this
+  mechanism reduces the risk of pointer hijacking, as was done with
+  Safe-Unlinking in the double-linked lists of Small-Bins.
+*/
+#define PROTECT_PTR(pos, ptr, type) \
+  ((type)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
+#define REVEAL_PTR(pos, ptr, type)  PROTECT_PTR (pos, ptr, type)
 
 /*
   REALLOC_ZERO_BYTES_FREES should be set if a call to
@@ -2157,12 +2167,14 @@ do_check_malloc_state (mstate av)
 
       while (p != 0)
         {
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("do_check_malloc_state(): un-aligned fastbin chunk detected");
           /* each chunk claims to be inuse */
           do_check_inuse_chunk (av, p);
           total += chunksize (p);
           /* chunk belongs in this bin */
           assert (fastbin_index (chunksize (p)) == i);
-          p = p->fd;
+          p = REVEAL_PTR (&p->fd, p->fd, mchunkptr);
         }
     }
 
@@ -2923,7 +2935,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
      detect a double free.  */
   e->key = tcache;
 
-  e->next = tcache->entries[tc_idx];
+  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx], tcache_entry *);
   tcache->entries[tc_idx] = e;
   ++(tcache->counts[tc_idx]);
 }
@@ -2934,9 +2946,11 @@ static __always_inline void *
 tcache_get (size_t tc_idx)
 {
   tcache_entry *e = tcache->entries[tc_idx];
-  tcache->entries[tc_idx] = e->next;
+  tcache->entries[tc_idx] = REVEAL_PTR (&e->next, e->next, tcache_entry *);
   --(tcache->counts[tc_idx]);
   e->key = NULL;
+  if (__glibc_unlikely (!aligned_OK (e)))
+    malloc_printerr ("malloc(): un-aligned tcache chunk detected");
   return (void *) e;
 }
 
@@ -2960,7 +2974,9 @@ tcache_thread_shutdown (void)
       while (tcache_tmp->entries[i])
 	{
 	  tcache_entry *e = tcache_tmp->entries[i];
-	  tcache_tmp->entries[i] = e->next;
+      if (__glibc_unlikely (!aligned_OK (e)))
+        malloc_printerr ("tcache_thread_shutdown(): un-aligned tcache chunk detected");
+	  tcache_tmp->entries[i] = REVEAL_PTR (&e->next, e->next, tcache_entry *);
 	  __libc_free (e);
 	}
     }
@@ -3570,8 +3586,11 @@ _int_malloc (mstate av, size_t bytes)
       victim = pp;					\
       if (victim == NULL)				\
 	break;						\
+      pp = REVEAL_PTR (&victim->fd, victim->fd, mfastbinptr);            \
+      if (__glibc_unlikely (!aligned_OK (pp)))                           \
+        malloc_printerr ("malloc(): un-aligned fastbin chunk detected"); \
     }							\
-  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
+  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
 	 != victim);					\
 
   if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
@@ -3583,8 +3602,11 @@ _int_malloc (mstate av, size_t bytes)
 
       if (victim != NULL)
 	{
+	  if (__glibc_unlikely (!aligned_OK (victim)))
+	    malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
+
 	  if (SINGLE_THREAD_P)
-	    *fb = victim->fd;
+	    *fb = REVEAL_PTR (&victim->fd, victim->fd, mfastbinptr);
 	  else
 	    REMOVE_FB (fb, pp, victim);
 	  if (__glibc_likely (victim != NULL))
@@ -3605,8 +3627,10 @@ _int_malloc (mstate av, size_t bytes)
 		  while (tcache->counts[tc_idx] < mp_.tcache_count
 			 && (tc_victim = *fb) != NULL)
 		    {
+		      if (__glibc_unlikely (!aligned_OK (tc_victim)))
+			malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
 		      if (SINGLE_THREAD_P)
-			*fb = tc_victim->fd;
+			*fb = REVEAL_PTR (&tc_victim->fd, tc_victim->fd, mfastbinptr);
 		      else
 			{
 			  REMOVE_FB (fb, pp, tc_victim);
@@ -4196,11 +4220,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
 	    for (tmp = tcache->entries[tc_idx];
 		 tmp;
-		 tmp = tmp->next)
+		 tmp = REVEAL_PTR (&tmp->next, tmp->next, tcache_entry *))
+        {
+	      if (__glibc_unlikely (!aligned_OK (tmp)))
+		malloc_printerr ("free(): un-aligned chunk detected in tcache 2");
 	      if (tmp == e)
 		malloc_printerr ("free(): double free detected in tcache 2");
 	    /* If we get here, it was a coincidence.  We've wasted a
 	       few cycles, but don't abort.  */
+        }
 	  }
 
 	if (tcache->counts[tc_idx] < mp_.tcache_count)
@@ -4264,7 +4292,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	   add (i.e., double free).  */
 	if (__builtin_expect (old == p, 0))
 	  malloc_printerr ("double free or corruption (fasttop)");
-	p->fd = old;
+	p->fd = PROTECT_PTR (&p->fd, old, mchunkptr);
 	*fb = p;
       }
     else
@@ -4274,7 +4302,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	     add (i.e., double free).  */
 	  if (__builtin_expect (old == p, 0))
 	    malloc_printerr ("double free or corruption (fasttop)");
-	  p->fd = old2 = old;
+	  old2 = old;
+	  p->fd = PROTECT_PTR (&p->fd, old, mchunkptr);
 	}
       while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
 	     != old2);
@@ -4472,13 +4501,16 @@ static void malloc_consolidate(mstate av)
     if (p != 0) {
       do {
 	{
+	  if (__glibc_unlikely (!aligned_OK (p)))
+	    malloc_printerr ("malloc_consolidate(): un-aligned fastbin chunk detected");
+
 	  unsigned int idx = fastbin_index (chunksize (p));
 	  if ((&fastbin (av, idx)) != fb)
 	    malloc_printerr ("malloc_consolidate(): invalid chunk size");
 	}
 
 	check_inuse_chunk(av, p);
-	nextp = p->fd;
+	nextp = REVEAL_PTR (&p->fd, p->fd, mchunkptr);
 
 	/* Slightly streamlined version of consolidation code in free() */
 	size = chunksize (p);
@@ -4896,8 +4928,12 @@ int_mallinfo (mstate av, struct mallinfo *m)
 
   for (i = 0; i < NFASTBINS; ++i)
     {
-      for (p = fastbin (av, i); p != 0; p = p->fd)
+      for (p = fastbin (av, i);
+           p != 0;
+           p = REVEAL_PTR (&p->fd, p->fd, mchunkptr))
         {
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("int_mallinfo(): un-aligned fastbin chunk detected");
           ++nfastblocks;
           fastavail += chunksize (p);
         }
@@ -5437,8 +5473,10 @@ __malloc_info (int options, FILE *fp)
 
 	      while (p != NULL)
 		{
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("__malloc_info(): un-aligned fastbin chunk detected");
 		  ++nthissize;
-		  p = p->fd;
+		  p = REVEAL_PTR (&p->fd, p->fd, mchunkptr);
 		}
 
 	      fastavail += nthissize * thissize;
-- 
2.17.1



* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-19 18:15   ` Eyal Itkin
@ 2020-03-20  0:10     ` Wilco Dijkstra
  2020-03-20  2:49       ` Carlos O'Donell
  2020-03-20  2:53     ` DJ Delorie
  1 sibling, 1 reply; 33+ messages in thread
From: Wilco Dijkstra @ 2020-03-20  0:10 UTC (permalink / raw)
  To: Eyal Itkin; +Cc: GNU C Library

Hi Eyal,

> I went over the patch and fixed it to conform to the coding standards.
> I'm also added it as an attachment, hoping that this way lines won't
> break during the transmission of the email.

Thanks, it now applied fine. There are still a few long lines over 80 chars:

+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("do_check_malloc_state(): un-aligned fastbin chunk detected");

There is a build issue in that PAGE_SHIFT is not defined. I'm not sure whether
this should come from a Linux header, but it only seems to be set by a few
targets with no generic default defined.

> Regarding the benchmarks, I've just ran them on a simple GCP instance
> (1 vCPU) before and after the patch. As you can see below, the change
> isn't measurable in the "malloc-simple" tests. Sometimes the vanilla
> version is faster (as it should be), but sometimes the patched version
> is faster, which usually means that the change is lower than the noise
> level on the testing server.

Yes, ideally you need an unshared machine with turbo turned off.
I tried it on a Neoverse N1 and the results look pretty much identical.
The perf profile shows it doesn't affect the hottest paths and only adds a
few extra instructions. So from a performance viewpoint it looks good.

Cheers,
Wilco


* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20  0:10     ` Wilco Dijkstra
@ 2020-03-20  2:49       ` Carlos O'Donell
  0 siblings, 0 replies; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-20  2:49 UTC (permalink / raw)
  To: Wilco Dijkstra, Eyal Itkin; +Cc: GNU C Library

On 3/19/20 8:10 PM, Wilco Dijkstra wrote:
> Hi Eyal,
> 
>> I went over the patch and fixed it to conform to the coding standards.
>> I'm also added it as an attachment, hoping that this way lines won't
>> break during the transmission of the email.
> 
> Thanks, it now applied fine. There are still a few long lines over 80 chars:
> 
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("do_check_malloc_state(): un-aligned fastbin chunk detected");
> 
> There is a build issue in that PAGE_SHIFT is not defined. I'm not sure whether
> this should come from a Linux header, but it only seems to be set by a few
> targets with no generic default defined.
> 
>> Regarding the benchmarks, I've just ran them on a simple GCP instance
>> (1 vCPU) before and after the patch. As you can see below, the change
>> isn't measurable in the "malloc-simple" tests. Sometimes the vanilla
>> version is faster (as it should be), but sometimes the patched version
>> is faster, which usually means that the change is lower than the noise
>> level on the testing server.
> 
> Yes, ideally you need an unshared machine with turbo turned off if.
> I tried it on a Neoverse N1 and the results look pretty much identical.
> The perf profile shows it doesn't affect the hottest paths and only adds a
> few extra instructions. So from a performance viewpoint it looks good.

Thanks, that's good to know; a performance problem here would have been
an upfront blocker to accepting the patches.

-- 
Cheers,
Carlos.



* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-19 18:15   ` Eyal Itkin
  2020-03-20  0:10     ` Wilco Dijkstra
@ 2020-03-20  2:53     ` DJ Delorie
  2020-03-20  9:35       ` Eyal Itkin
  1 sibling, 1 reply; 33+ messages in thread
From: DJ Delorie @ 2020-03-20  2:53 UTC (permalink / raw)
  To: Eyal Itkin; +Cc: Wilco.Dijkstra, libc-alpha

Eyal Itkin <eyal.itkin@gmail.com> writes:
> Safe-Linking is a security mechanism that protects single-linked
> lists (such as the fastbin and tcache) from being tampered by attackers.
> The mechanism makes use of randomness from ASLR (mmap_base), and when
> combined with chunk alignment integrity checks, it protects the
> pointers from being hijacked by an attacker.

Based on the patch, it seems that protected pointers are only used in
chunks, not in the heap-global structures (i.e. the fastbins themselves
aren't protected, only the "next" pointer in each chunk).  If this is
so, could you add such a note to the comment before PROTECT_PTR ?

>   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
>   * *L = PROTECT(P)

I.e. any stored pointer's value is XOR'd with the pointer's address
bits.
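
For example (addresses made up purely for illustration): if a chunk's fd
pointer P is stored at L = 0x7ffe4b21a2d0, then with PAGE_SHIFT = 12 the
stored value is (0x7ffe4b21a2d0 >> 12) ^ P = 0x7ffe4b21a ^ P, so only
someone who knows the upper bits of L can forge a value that reveals to
a pointer of their choosing.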

> +#define PROTECT_PTR(pos, ptr, type) \
> +  ((type)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
> +#define REVEAL_PTR(pos, ptr, type)  PROTECT_PTR (pos, ptr, type)

Style: whitespace after casts

Bug: PAGE_SHIFT is an obsolete macro these days.  Not sure what to use
but I wouldn't be opposed to just putting "12" in there.

Suggestion1: since the "type" can be determined using __typeof, there's
no need to pass it explicitly.  I.e.

#define PROTECT_PTR(pos, ptr) \
  ((__typeof ptr)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))

Suggestion2: REVEAL_PTR is always called with "&foo,foo" so it can be
further reduced:

#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)


The rest of the patch looks good to me, but I think the above changes
will make the new code much more readable.
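
For what it's worth, here is a tiny standalone sketch of the round trip
with the simplified macros (purely illustrative; the hard-coded shift of
12 and the demo variable names are mine, not from the patch):

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT 12   /* just for this demo */
#define PROTECT_PTR(pos, ptr) \
  ((__typeof (ptr)) ((((size_t) pos) >> PAGE_SHIFT) ^ ((size_t) ptr)))
#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)

int
main (void)
{
  void *next = malloc (32);   /* the pointer P we want to store */
  void *slot;                 /* the location L it is stored in */

  slot = PROTECT_PTR (&slot, next);              /* *L = PROTECT (P) */
  printf ("masked:   %p\n", slot);
  printf ("revealed: %p\n", REVEAL_PTR (slot));  /* prints P again */

  free (next);
  return 0;
}

REVEAL_PTR (slot) expands to PROTECT_PTR (&slot, slot), so XORing with
(&slot >> PAGE_SHIFT) a second time recovers the original pointer.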



* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20  2:53     ` DJ Delorie
@ 2020-03-20  9:35       ` Eyal Itkin
  2020-03-20 18:00         ` Adhemerval Zanella
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-03-20  9:35 UTC (permalink / raw)
  To: DJ Delorie; +Cc: Wilco Dijkstra, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 2560 bytes --]

Thanks for all of the feedback.

1. I handled lines longer than 80 characters; there shouldn't be any
of those now.
2. Took your suggestion and removed PAGE_SHIFT in favor of 12. It
would have been cleaner if there were a proper POSIX-defined static
way of inferring this size. Anyway, 12 is the most common value, and
in some cases it is 13 or 14, which only means we will lose 1-2 bits
of protection, which isn't that bad.
3. I implemented all of the coding convention suggestions.
4. I clarified the comment to specifically mention that the mechanism
protects the "next" pointers of the single-linked lists of the
Fast-Bin and TCache. Also updated the commit message accordingly.

The patch indeed looks cleaner now (attached); feel free to add more
comments/suggestions if needed.

Thanks again for your help with this patch.
Eyal.

On Fri, Mar 20, 2020 at 4:53 AM DJ Delorie <dj@redhat.com> wrote:
>
> Eyal Itkin <eyal.itkin@gmail.com> writes:
> > Safe-Linking is a security mechanism that protects single-linked
> > lists (such as the fastbin and tcache) from being tampered by attackers.
> > The mechanism makes use of randomness from ASLR (mmap_base), and when
> > combined with chunk alignment integrity checks, it protects the
> > pointers from being hijacked by an attacker.
>
> Based on the patch, it seems that protected pointers are only used in
> chunks, not in the heap-global structures (i.e. the fastbins themselves
> aren't protected, only the "next" pointer in each chunk).  If this is
> so, could you add such a note to the comment before PROTECT_PTR ?
>
> >   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
> >   * *L = PROTECT(P)
>
> I.e. any stored pointer's value is XOR'd with the pointer's address
> bits.
>
> > +#define PROTECT_PTR(pos, ptr, type) \
> > +  ((type)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
> > +#define REVEAL_PTR(pos, ptr, type)  PROTECT_PTR (pos, ptr, type)
>
> Style: whitespace after casts
>
> Bug: PAGE_SHIFT is an obsolete macro these days.  Not sure what to use
> but I wouldn't be opposed to just putting "12" in there.
>
> Suggestion1: since the "type" can be determined using __typeof, there's
> no need to pass it explicitly.  I.e.
>
> #define PROTECT_PTR(pos, ptr) \
>   ((__typeof ptr)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
>
> Suggestion2: REVEAL_PTR is always called with "&foo,foo" so it can be
> further reduced:
>
> #define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
>
>
> The rest of the patch looks good to me, but I think the above changes
> will make the new code much more readable.
>

[-- Attachment #2: 0001-Add-Safe-Linking-to-fastbins-and-tcache.patch --]
[-- Type: application/octet-stream, Size: 9283 bytes --]

From a691f4c1bf8da83c5150b792b5715bb744d42a56 Mon Sep 17 00:00:00 2001
From: Eyal Itkin <eyalit@checkpoint.com>
Date: Fri, 20 Mar 2020 11:15:25 +0200
Subject: [PATCH] Add Safe-Linking to fastbins and tcache

Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbin and tcache) from being tampered with by
attackers. The mechanism makes use of randomness from ASLR (mmap_base),
and when combined with chunk alignment integrity checks, it protects
the "next" pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position) will be merged with the LSBs of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.

An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must correctly guess the alignment bits

On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.

This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the normal variance between test runs on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.
---
 malloc/malloc.c | 73 ++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 60 insertions(+), 13 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 7d7d30bb13..39f35f8864 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -327,6 +327,20 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX
 #endif
 
+/*
+  Safe-Linking:
+  Use randomness from ASLR (mmap_base) to protect single-linked lists
+  of Fast-Bins and TCache. That is, mask the "next" pointers of the
+  lists' chunks, and also perform allocation alignment checks on them.
+  This mechanism reduces the risk of pointer hijacking, as was done with
+  Safe-Unlinking in the double-linked lists of Small-Bins.
+  Using ASLR_BASE_SHIFT since PAGE_SHIFT is obsolete, and I'm not sure there
+  is a proper static POSIX-defined way of deriving the PAGE_SIZE.
+*/
+#define ASLR_BASE_SHIFT  12
+#define PROTECT_PTR(pos, ptr) \
+  ((__typeof (ptr)) ((((size_t) pos) >> ASLR_BASE_SHIFT) ^ ((size_t) ptr)))
+#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
 
 /*
   REALLOC_ZERO_BYTES_FREES should be set if a call to
@@ -2157,12 +2171,15 @@ do_check_malloc_state (mstate av)
 
       while (p != 0)
         {
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("do_check_malloc_state(): " \
+                             "un-aligned fastbin chunk detected");
           /* each chunk claims to be inuse */
           do_check_inuse_chunk (av, p);
           total += chunksize (p);
           /* chunk belongs in this bin */
           assert (fastbin_index (chunksize (p)) == i);
-          p = p->fd;
+          p = REVEAL_PTR (p->fd);
         }
     }
 
@@ -2923,7 +2940,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
      detect a double free.  */
   e->key = tcache;
 
-  e->next = tcache->entries[tc_idx];
+  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
   tcache->entries[tc_idx] = e;
   ++(tcache->counts[tc_idx]);
 }
@@ -2934,9 +2951,11 @@ static __always_inline void *
 tcache_get (size_t tc_idx)
 {
   tcache_entry *e = tcache->entries[tc_idx];
-  tcache->entries[tc_idx] = e->next;
+  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
   --(tcache->counts[tc_idx]);
   e->key = NULL;
+  if (__glibc_unlikely (!aligned_OK (e)))
+    malloc_printerr ("malloc(): un-aligned tcache chunk detected");
   return (void *) e;
 }
 
@@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
       while (tcache_tmp->entries[i])
 	{
 	  tcache_entry *e = tcache_tmp->entries[i];
-	  tcache_tmp->entries[i] = e->next;
+      if (__glibc_unlikely (!aligned_OK (e)))
+        malloc_printerr ("tcache_thread_shutdown(): " \
+                         "un-aligned tcache chunk detected");
+	  tcache_tmp->entries[i] = REVEAL_PTR (e->next);
 	  __libc_free (e);
 	}
     }
@@ -3570,8 +3592,11 @@ _int_malloc (mstate av, size_t bytes)
       victim = pp;					\
       if (victim == NULL)				\
 	break;						\
+      pp = REVEAL_PTR (victim->fd);                                      \
+      if (__glibc_unlikely (!aligned_OK (pp)))                           \
+        malloc_printerr ("malloc(): un-aligned fastbin chunk detected"); \
     }							\
-  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
+  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
 	 != victim);					\
 
   if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
@@ -3583,8 +3608,11 @@ _int_malloc (mstate av, size_t bytes)
 
       if (victim != NULL)
 	{
+	  if (__glibc_unlikely (!aligned_OK (victim)))
+	    malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
+
 	  if (SINGLE_THREAD_P)
-	    *fb = victim->fd;
+	    *fb = REVEAL_PTR (victim->fd);
 	  else
 	    REMOVE_FB (fb, pp, victim);
 	  if (__glibc_likely (victim != NULL))
@@ -3605,8 +3633,10 @@ _int_malloc (mstate av, size_t bytes)
 		  while (tcache->counts[tc_idx] < mp_.tcache_count
 			 && (tc_victim = *fb) != NULL)
 		    {
+		      if (__glibc_unlikely (!aligned_OK (tc_victim)))
+			malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
 		      if (SINGLE_THREAD_P)
-			*fb = tc_victim->fd;
+			*fb = REVEAL_PTR (tc_victim->fd);
 		      else
 			{
 			  REMOVE_FB (fb, pp, tc_victim);
@@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
 	    for (tmp = tcache->entries[tc_idx];
 		 tmp;
-		 tmp = tmp->next)
+		 tmp = REVEAL_PTR (tmp->next))
+        {
+	      if (__glibc_unlikely (!aligned_OK (tmp)))
+		malloc_printerr ("free(): un-aligned chunk detected in tcache 2");
 	      if (tmp == e)
 		malloc_printerr ("free(): double free detected in tcache 2");
 	    /* If we get here, it was a coincidence.  We've wasted a
 	       few cycles, but don't abort.  */
+        }
 	  }
 
 	if (tcache->counts[tc_idx] < mp_.tcache_count)
@@ -4264,7 +4298,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	   add (i.e., double free).  */
 	if (__builtin_expect (old == p, 0))
 	  malloc_printerr ("double free or corruption (fasttop)");
-	p->fd = old;
+	p->fd = PROTECT_PTR (&p->fd, old);
 	*fb = p;
       }
     else
@@ -4274,7 +4308,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	     add (i.e., double free).  */
 	  if (__builtin_expect (old == p, 0))
 	    malloc_printerr ("double free or corruption (fasttop)");
-	  p->fd = old2 = old;
+	  old2 = old;
+	  p->fd = PROTECT_PTR (&p->fd, old);
 	}
       while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
 	     != old2);
@@ -4472,13 +4507,17 @@ static void malloc_consolidate(mstate av)
     if (p != 0) {
       do {
 	{
+	  if (__glibc_unlikely (!aligned_OK (p)))
+	    malloc_printerr ("malloc_consolidate(): " \
+                         "un-aligned fastbin chunk detected");
+
 	  unsigned int idx = fastbin_index (chunksize (p));
 	  if ((&fastbin (av, idx)) != fb)
 	    malloc_printerr ("malloc_consolidate(): invalid chunk size");
 	}
 
 	check_inuse_chunk(av, p);
-	nextp = p->fd;
+	nextp = REVEAL_PTR (p->fd);
 
 	/* Slightly streamlined version of consolidation code in free() */
 	size = chunksize (p);
@@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
 
   for (i = 0; i < NFASTBINS; ++i)
     {
-      for (p = fastbin (av, i); p != 0; p = p->fd)
+      for (p = fastbin (av, i);
+           p != 0;
+           p = REVEAL_PTR (p->fd))
         {
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("int_mallinfo(): " \
+                             "un-aligned fastbin chunk detected");
           ++nfastblocks;
           fastavail += chunksize (p);
         }
@@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
 
 	      while (p != NULL)
 		{
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("__malloc_info(): " \
+                             "un-aligned fastbin chunk detected");
 		  ++nthissize;
-		  p = p->fd;
+		  p = REVEAL_PTR (p->fd);
 		}
 
 	      fastavail += nthissize * thissize;
-- 
2.17.1



* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20  9:35       ` Eyal Itkin
@ 2020-03-20 18:00         ` Adhemerval Zanella
  2020-03-20 19:45           ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Adhemerval Zanella @ 2020-03-20 18:00 UTC (permalink / raw)
  To: libc-alpha, eyal.itkin



On 20/03/2020 06:35, Eyal Itkin via Libc-alpha wrote:
> Thanks for all of the feedback.
> 
> 1. I handled lines longer than 80 characters, there shouldn't be any
> of those now.
> 2. Took your suggestion and removed PAGE_SHIFT in favor of 12. It
> would have been cleaner if there was a proper POSIX-defined static way
> of inferring this size. Anyway, 12 is the most common value, and in
> some cases it is 13 or 14 which only means we will lose 1-2 bits of
> protection, which isn't that bad.
> 3. I implemented all of the coding convention suggestions.
> 4. I clarified the comment to specifically mention that the mechanism
> protects the "next" pointers of the single-linked lists of the
> Fast-Bin and TCache. Also updated the commit message accordingly.
> 
> The patch indeed looks cleaner now (attached), feel free to add more
> comments/suggestions if needed.
> 
> Thanks again for your help with this patch.
> Eyal.

Thanks for working on this.  I would only ask if you could send the next
iteration inline instead of as an attachment (it makes review a lot easier
in most email readers, and it is the default for tools like
git send-email).

> From a691f4c1bf8da83c5150b792b5715bb744d42a56 Mon Sep 17 00:00:00 2001
> From: Eyal Itkin <eyalit@checkpoint.com>
> Date: Fri, 20 Mar 2020 11:15:25 +0200
> Subject: [PATCH] Add Safe-Linking to fastbins and tcache
> 
> Safe-Linking is a security mechanism that protects single-linked
> lists (such as the fastbin and tcache) from being tampered by attackers.
> The mechanism makes use of randomness from ASLR (mmap_base), and when
> combined with chunk alignment integrity checks, it protects the "next"
> pointers from being hijacked by an attacker.
> 
> While Safe-Unlinking protects double-linked lists (such as the small
> bins), there wasn't any similar protection for attacks against
> single-linked lists. This solution protects against 3 common attacks:
>   * Partial pointer override: modifies the lower bytes (Little Endian)
>   * Full pointer override: hijacks the pointer to an attacker's location
>   * Unaligned chunks: pointing the list to an unaligned address
> 
> The design assumes an attacker doesn't know where the heap is located,
> and uses the ASLR randomness to "sign" the single-linked pointers. We
> mark the pointer as P and the location in which it is stored as L, and
> the calculation will be:
>   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
>   * *L = PROTECT(P)
> 
> This way, the random bits from the address L (which start at the bit
> in the PAGE_SHIFT position), will be merged with LSB of the stored
> protected pointer. This protection layer prevents an attacker from
> modifying the pointer into a controlled value.
> 
> An additional check that the chunks are MALLOC_ALIGNed adds an
> important layer:
>   * Attackers can't point to illegal (unaligned) memory addresses
>   * Attackers must guess correctly the alignment bits
> 
> On standard 32 bit Linux machines, an attack will directly fail 7
> out of 8 times, and on 64 bit machines it will fail 15 out of 16
> times.
> 
> This proposed patch was benchmarked and it's effect on the overall
> performance of the heap was negligible and couldn't be distinguished
> from the default variance between tests on the vanilla version. A
> similar protection was added to Chromium's version of TCMalloc
> in 2012, and according to their documentation it had an overhead of
> less than 2%.

The idea seems good, although I have comments below.  It is unfortunate
that for threaded workloads this mitigation is greatly reduced without
the fix for arena entropy (BZ#22853). My idea is to see if I can come up
with a testcase for BZ#22853 and submit the patch from comment 5.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=22853

> ---
>  malloc/malloc.c | 73 ++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 60 insertions(+), 13 deletions(-)
> 
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 7d7d30bb13..39f35f8864 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -327,6 +327,20 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
>  # define MAX_TCACHE_COUNT UINT16_MAX
>  #endif
>  
> +/*
> +  Safe-Linking:
> +  Use randomness from ASLR (mmap_base) to protect single-linked lists
> +  of Fast-Bins and TCache. That is, mask the "next" pointers of the

Use double space after a period.

> +  lists' chunks, and also perform allocation alignment checks on them.
> +  This mechanism reduces the risk of pointer hijacking, as was done with
> +  Safe-Unlinking in the double-linked lists of Small-Bins.
> +  Using ASLR_BASE_SHIFT since PAGE_SHIFT is obsolete, and I'm not sure there
> +  is a proper static POSIX-defined way of deriving the PAGE_SIZE.
> +*/

Some systems (aarch64, powerpc64le, m68k, etc.) may have a different page
size depending on the kernel configuration, thus providing a PAGE_SHIFT
for such architectures does not make sense.  That's why the comment should
not mention it.  I would suggest something like:

  It assumes a minimum page size of 4096 bytes.  Systems with large pages
  provide less entropy, although the pointer mangling still works.

> +#define ASLR_BASE_SHIFT  12
> +#define PROTECT_PTR(pos, ptr) \
> +  ((__typeof (ptr)) ((((size_t) pos) >> ASLR_BASE_SHIFT) ^ ((size_t) ptr)))
> +#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)

I think it would be better if we used a proper static inline function
for newer code.

>  
>  /*
>    REALLOC_ZERO_BYTES_FREES should be set if a call to
> @@ -2157,12 +2171,15 @@ do_check_malloc_state (mstate av)
>  
>        while (p != 0)
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("do_check_malloc_state(): " \
> +                             "un-aligned fastbin chunk detected");

'un-aligned' seems wrong; should it be 'unaligned'?

>            /* each chunk claims to be inuse */
>            do_check_inuse_chunk (av, p);
>            total += chunksize (p);
>            /* chunk belongs in this bin */
>            assert (fastbin_index (chunksize (p)) == i);
> -          p = p->fd;
> +          p = REVEAL_PTR (p->fd);
>          }
>      }
>  
> @@ -2923,7 +2940,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
>       detect a double free.  */
>    e->key = tcache;
>  
> -  e->next = tcache->entries[tc_idx];
> +  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
>    tcache->entries[tc_idx] = e;
>    ++(tcache->counts[tc_idx]);
>  }> @@ -2934,9 +2951,11 @@ static __always_inline void *
>  tcache_get (size_t tc_idx)
>  {
>    tcache_entry *e = tcache->entries[tc_idx];
> -  tcache->entries[tc_idx] = e->next;
> +  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
>    --(tcache->counts[tc_idx]);
>    e->key = NULL;
> +  if (__glibc_unlikely (!aligned_OK (e)))
> +    malloc_printerr ("malloc(): un-aligned tcache chunk detected");
>    return (void *) e;
>  }
>  
> @@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
>        while (tcache_tmp->entries[i])
>  	{
>  	  tcache_entry *e = tcache_tmp->entries[i];
> -	  tcache_tmp->entries[i] = e->next;
> +      if (__glibc_unlikely (!aligned_OK (e)))
> +        malloc_printerr ("tcache_thread_shutdown(): " \
> +                         "un-aligned tcache chunk detected");
> +	  tcache_tmp->entries[i] = REVEAL_PTR (e->next);
>  	  __libc_free (e);
>  	}
>      }
> @@ -3570,8 +3592,11 @@ _int_malloc (mstate av, size_t bytes)
>        victim = pp;					\
>        if (victim == NULL)				\
>  	break;						\
> +      pp = REVEAL_PTR (victim->fd);                                      \
> +      if (__glibc_unlikely (!aligned_OK (pp)))                           \
> +        malloc_printerr ("malloc(): un-aligned fastbin chunk detected"); \
>      }							\
> -  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
> +  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
>  	 != victim);					\
>  
>    if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
> @@ -3583,8 +3608,11 @@ _int_malloc (mstate av, size_t bytes)
>  
>        if (victim != NULL)
>  	{
> +	  if (__glibc_unlikely (!aligned_OK (victim)))
> +	    malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
> +
>  	  if (SINGLE_THREAD_P)
> -	    *fb = victim->fd;
> +	    *fb = REVEAL_PTR (victim->fd);
>  	  else
>  	    REMOVE_FB (fb, pp, victim);
>  	  if (__glibc_likely (victim != NULL))
> @@ -3605,8 +3633,10 @@ _int_malloc (mstate av, size_t bytes)
>  		  while (tcache->counts[tc_idx] < mp_.tcache_count
>  			 && (tc_victim = *fb) != NULL)
>  		    {
> +		      if (__glibc_unlikely (!aligned_OK (tc_victim)))
> +			malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
>  		      if (SINGLE_THREAD_P)
> -			*fb = tc_victim->fd;
> +			*fb = REVEAL_PTR (tc_victim->fd);
>  		      else
>  			{
>  			  REMOVE_FB (fb, pp, tc_victim);
> @@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
>  	    for (tmp = tcache->entries[tc_idx];
>  		 tmp;
> -		 tmp = tmp->next)
> +		 tmp = REVEAL_PTR (tmp->next))
> +        {
> +	      if (__glibc_unlikely (!aligned_OK (tmp)))
> +		malloc_printerr ("free(): un-aligned chunk detected in tcache 2");
>  	      if (tmp == e)
>  		malloc_printerr ("free(): double free detected in tcache 2");
>  	    /* If we get here, it was a coincidence.  We've wasted a
>  	       few cycles, but don't abort.  */
> +        }
>  	  }
>  
>  	if (tcache->counts[tc_idx] < mp_.tcache_count)
> @@ -4264,7 +4298,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	   add (i.e., double free).  */
>  	if (__builtin_expect (old == p, 0))
>  	  malloc_printerr ("double free or corruption (fasttop)");
> -	p->fd = old;
> +	p->fd = PROTECT_PTR (&p->fd, old);
>  	*fb = p;
>        }
>      else
> @@ -4274,7 +4308,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	     add (i.e., double free).  */
>  	  if (__builtin_expect (old == p, 0))
>  	    malloc_printerr ("double free or corruption (fasttop)");
> -	  p->fd = old2 = old;
> +	  old2 = old;
> +	  p->fd = PROTECT_PTR (&p->fd, old);
>  	}
>        while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
>  	     != old2);
> @@ -4472,13 +4507,17 @@ static void malloc_consolidate(mstate av)
>      if (p != 0) {
>        do {
>  	{
> +	  if (__glibc_unlikely (!aligned_OK (p)))
> +	    malloc_printerr ("malloc_consolidate(): " \
> +                         "un-aligned fastbin chunk detected");
> +
>  	  unsigned int idx = fastbin_index (chunksize (p));
>  	  if ((&fastbin (av, idx)) != fb)
>  	    malloc_printerr ("malloc_consolidate(): invalid chunk size");
>  	}
>  
>  	check_inuse_chunk(av, p);
> -	nextp = p->fd;
> +	nextp = REVEAL_PTR (p->fd);
>  
>  	/* Slightly streamlined version of consolidation code in free() */
>  	size = chunksize (p);
> @@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
>  
>    for (i = 0; i < NFASTBINS; ++i)
>      {
> -      for (p = fastbin (av, i); p != 0; p = p->fd)
> +      for (p = fastbin (av, i);
> +           p != 0;
> +           p = REVEAL_PTR (p->fd))
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("int_mallinfo(): " \
> +                             "un-aligned fastbin chunk detected");
>            ++nfastblocks;
>            fastavail += chunksize (p);
>          }
> @@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
>  
>  	      while (p != NULL)
>  		{
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("__malloc_info(): " \
> +                             "un-aligned fastbin chunk detected");
>  		  ++nthissize;
> -		  p = p->fd;
> +		  p = REVEAL_PTR (p->fd);
>  		}
>  
>  	      fastavail += nthissize * thissize;
> -- 
> 2.17.1


* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20 18:00         ` Adhemerval Zanella
@ 2020-03-20 19:45           ` Eyal Itkin
  2020-03-20 23:58             ` DJ Delorie
                               ` (2 more replies)
  0 siblings, 3 replies; 33+ messages in thread
From: Eyal Itkin @ 2020-03-20 19:45 UTC (permalink / raw)
  To: Adhemerval Zanella; +Cc: GNU C Library

[-- Attachment #1: Type: text/plain, Size: 14130 bytes --]

I really am sorry to send the patch as an attachment again; however, I
had too many difficulties using git send-email and convincing it to
send the patch as a proper reply to this ongoing thread. I wish Gmail's
client wouldn't auto-wrap the lines and break the patch, but sadly they
have refused to implement this feature for over a decade now.

Regarding the previous feedback, I implemented all of the comments,
aside from transforming the macros into static inline functions.

I did try to switch to static functions, but this meant that I had to
change back the signature of REVEAL_PTR to once again receive two
arguments, "pos" and "ptr", as using &ptr could no longer work.
Since this change would have contradicted the previous comment about
the readability of the code, I undid it and stayed with macros for
the sake of readability. Using macros also has the benefit of working
with correctly typed objects instead of generic void * types in the
API of the functions.
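
To illustrate the trade-off, here is a rough sketch only (not something
I am proposing): a static inline helper has to take both arguments
explicitly, roughly

#include <stddef.h>

/* In malloc.c this would be static __always_inline.  */
static inline void *
reveal_ptr (void *pos, void *ptr)
{
  return (void *) (((size_t) pos >> 12) ^ (size_t) ptr);
}

so a call site such as REVEAL_PTR (e->next) turns back into
reveal_ptr (&e->next, e->next), and the helper traffics in generic
void * instead of the properly typed pointers that __typeof preserves
in the macro version.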

Thanks again for your help with this.
Eyal.

On Fri, Mar 20, 2020 at 8:00 PM Adhemerval Zanella
<adhemerval.zanella@linaro.org> wrote:
>
>
>
> On 20/03/2020 06:35, Eyal Itkin via Libc-alpha wrote:
> > Thanks for all of the feedback.
> >
> > 1. I handled lines longer than 80 characters, there shouldn't be any
> > of those now.
> > 2. Took your suggestion and removed PAGE_SHIFT in favor of 12. It
> > would have been cleaner if there was a proper POSIX-defined static way
> > of inferring this size. Anyway, 12 is the most common value, and in
> > some cases it is 13 or 14 which only means we will lose 1-2 bits of
> > protection, which isn't that bad.
> > 3. I implemented all of the coding convention suggestions.
> > 4. I clarified the comment to specifically mention that the mechanism
> > protects the "next" pointers of the single-linked lists of the
> > Fast-Bin and TCache. Also updated the commit message accordingly.
> >
> > The patch indeed looks cleaner now (attached), feel free to add more
> > comments/suggestions if needed.
> >
> > Thanks again for your help with this patch.
> > Eyal.
>
> Thanks for working on this, I would only ask if you could send next
> iteration inline instead of as an attachment (it helps a lot reviewed
> with most email readers and it is the default for tools like
> git send-email).
>
> > From a691f4c1bf8da83c5150b792b5715bb744d42a56 Mon Sep 17 00:00:00 2001
> > From: Eyal Itkin <eyalit@checkpoint.com>
> > Date: Fri, 20 Mar 2020 11:15:25 +0200
> > Subject: [PATCH] Add Safe-Linking to fastbins and tcache
> >
> > Safe-Linking is a security mechanism that protects single-linked
> > lists (such as the fastbin and tcache) from being tampered by attackers.
> > The mechanism makes use of randomness from ASLR (mmap_base), and when
> > combined with chunk alignment integrity checks, it protects the "next"
> > pointers from being hijacked by an attacker.
> >
> > While Safe-Unlinking protects double-linked lists (such as the small
> > bins), there wasn't any similar protection for attacks against
> > single-linked lists. This solution protects against 3 common attacks:
> >   * Partial pointer override: modifies the lower bytes (Little Endian)
> >   * Full pointer override: hijacks the pointer to an attacker's location
> >   * Unaligned chunks: pointing the list to an unaligned address
> >
> > The design assumes an attacker doesn't know where the heap is located,
> > and uses the ASLR randomness to "sign" the single-linked pointers. We
> > mark the pointer as P and the location in which it is stored as L, and
> > the calculation will be:
> >   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
> >   * *L = PROTECT(P)
> >
> > This way, the random bits from the address L (which start at the bit
> > in the PAGE_SHIFT position), will be merged with LSB of the stored
> > protected pointer. This protection layer prevents an attacker from
> > modifying the pointer into a controlled value.
> >
> > An additional check that the chunks are MALLOC_ALIGNed adds an
> > important layer:
> >   * Attackers can't point to illegal (unaligned) memory addresses
> >   * Attackers must guess correctly the alignment bits
> >
> > On standard 32 bit Linux machines, an attack will directly fail 7
> > out of 8 times, and on 64 bit machines it will fail 15 out of 16
> > times.
> >
> > This proposed patch was benchmarked and it's effect on the overall
> > performance of the heap was negligible and couldn't be distinguished
> > from the default variance between tests on the vanilla version. A
> > similar protection was added to Chromium's version of TCMalloc
> > in 2012, and according to their documentation it had an overhead of
> > less than 2%.
>
> The idea seems good although I have comments below.  It is unfortunate
> that for thread workloads, this mitigation is greatly reduced without
> the fix of arena entropy (BZ#22853). My idea is to see I can come up
> with a testcase for BZ#22853 and submit the patch on comment 5.
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=22853
>
> > ---
> >  malloc/malloc.c | 73 ++++++++++++++++++++++++++++++++++++++++---------
> >  1 file changed, 60 insertions(+), 13 deletions(-)
> >
> > diff --git a/malloc/malloc.c b/malloc/malloc.c
> > index 7d7d30bb13..39f35f8864 100644
> > --- a/malloc/malloc.c
> > +++ b/malloc/malloc.c
> > @@ -327,6 +327,20 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
> >  # define MAX_TCACHE_COUNT UINT16_MAX
> >  #endif
> >
> > +/*
> > +  Safe-Linking:
> > +  Use randomness from ASLR (mmap_base) to protect single-linked lists
> > +  of Fast-Bins and TCache. That is, mask the "next" pointers of the
>
> Use double space after a period.
>
> > +  lists' chunks, and also perform allocation alignment checks on them.
> > +  This mechanism reduces the risk of pointer hijacking, as was done with
> > +  Safe-Unlinking in the double-linked lists of Small-Bins.
> > +  Using ASLR_BASE_SHIFT since PAGE_SHIFT is obsolete, and I'm not sure there
> > +  is a proper static POSIX-defined way of deriving the PAGE_SIZE.
> > +*/
>
> Some systems (aarch64, powerpc64le, m68k, etc) may have different page
> size depending of the kernel configuration, thus providing a PAGE_SHIFT
> for such architecture does not make sense.  That's why it should not
> mention it. I would suggest something like:
>
>   It assumes a minimum page size of 4096 KB.  System with large pages
>   provide less entropy, although the pointer mangling still works.
>
> > +#define ASLR_BASE_SHIFT  12
> > +#define PROTECT_PTR(pos, ptr) \
> > +  ((__typeof (ptr)) ((((size_t) pos) >> ASLR_BASE_SHIFT) ^ ((size_t) ptr)))
> > +#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
>
> I think it would be better if we use proper static inline function
> for newer code.
>
> >
> >  /*
> >    REALLOC_ZERO_BYTES_FREES should be set if a call to
> > @@ -2157,12 +2171,15 @@ do_check_malloc_state (mstate av)
> >
> >        while (p != 0)
> >          {
> > +          if (__glibc_unlikely (!aligned_OK (p)))
> > +            malloc_printerr ("do_check_malloc_state(): " \
> > +                             "un-aligned fastbin chunk detected");
>
> 'un-aligned' seemed wrong, should it be unaligned?
>
> >            /* each chunk claims to be inuse */
> >            do_check_inuse_chunk (av, p);
> >            total += chunksize (p);
> >            /* chunk belongs in this bin */
> >            assert (fastbin_index (chunksize (p)) == i);
> > -          p = p->fd;
> > +          p = REVEAL_PTR (p->fd);
> >          }
> >      }
> >
> > @@ -2923,7 +2940,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
> >       detect a double free.  */
> >    e->key = tcache;
> >
> > -  e->next = tcache->entries[tc_idx];
> > +  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
> >    tcache->entries[tc_idx] = e;
> >    ++(tcache->counts[tc_idx]);
> >  }> @@ -2934,9 +2951,11 @@ static __always_inline void *
> >  tcache_get (size_t tc_idx)
> >  {
> >    tcache_entry *e = tcache->entries[tc_idx];
> > -  tcache->entries[tc_idx] = e->next;
> > +  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
> >    --(tcache->counts[tc_idx]);
> >    e->key = NULL;
> > +  if (__glibc_unlikely (!aligned_OK (e)))
> > +    malloc_printerr ("malloc(): un-aligned tcache chunk detected");
> >    return (void *) e;
> >  }
> >
> > @@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
> >        while (tcache_tmp->entries[i])
> >       {
> >         tcache_entry *e = tcache_tmp->entries[i];
> > -       tcache_tmp->entries[i] = e->next;
> > +      if (__glibc_unlikely (!aligned_OK (e)))
> > +        malloc_printerr ("tcache_thread_shutdown(): " \
> > +                         "un-aligned tcache chunk detected");
> > +       tcache_tmp->entries[i] = REVEAL_PTR (e->next);
> >         __libc_free (e);
> >       }
> >      }
> > @@ -3570,8 +3592,11 @@ _int_malloc (mstate av, size_t bytes)
> >        victim = pp;                                   \
> >        if (victim == NULL)                            \
> >       break;                                          \
> > +      pp = REVEAL_PTR (victim->fd);                                      \
> > +      if (__glibc_unlikely (!aligned_OK (pp)))                           \
> > +        malloc_printerr ("malloc(): un-aligned fastbin chunk detected"); \
> >      }                                                        \
> > -  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
> > +  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
> >        != victim);                                    \
> >
> >    if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
> > @@ -3583,8 +3608,11 @@ _int_malloc (mstate av, size_t bytes)
> >
> >        if (victim != NULL)
> >       {
> > +       if (__glibc_unlikely (!aligned_OK (victim)))
> > +         malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
> > +
> >         if (SINGLE_THREAD_P)
> > -         *fb = victim->fd;
> > +         *fb = REVEAL_PTR (victim->fd);
> >         else
> >           REMOVE_FB (fb, pp, victim);
> >         if (__glibc_likely (victim != NULL))
> > @@ -3605,8 +3633,10 @@ _int_malloc (mstate av, size_t bytes)
> >                 while (tcache->counts[tc_idx] < mp_.tcache_count
> >                        && (tc_victim = *fb) != NULL)
> >                   {
> > +                   if (__glibc_unlikely (!aligned_OK (tc_victim)))
> > +                     malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
> >                     if (SINGLE_THREAD_P)
> > -                     *fb = tc_victim->fd;
> > +                     *fb = REVEAL_PTR (tc_victim->fd);
> >                     else
> >                       {
> >                         REMOVE_FB (fb, pp, tc_victim);
> > @@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
> >           LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
> >           for (tmp = tcache->entries[tc_idx];
> >                tmp;
> > -              tmp = tmp->next)
> > +              tmp = REVEAL_PTR (tmp->next))
> > +        {
> > +           if (__glibc_unlikely (!aligned_OK (tmp)))
> > +             malloc_printerr ("free(): un-aligned chunk detected in tcache 2");
> >             if (tmp == e)
> >               malloc_printerr ("free(): double free detected in tcache 2");
> >           /* If we get here, it was a coincidence.  We've wasted a
> >              few cycles, but don't abort.  */
> > +        }
> >         }
> >
> >       if (tcache->counts[tc_idx] < mp_.tcache_count)
> > @@ -4264,7 +4298,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
> >          add (i.e., double free).  */
> >       if (__builtin_expect (old == p, 0))
> >         malloc_printerr ("double free or corruption (fasttop)");
> > -     p->fd = old;
> > +     p->fd = PROTECT_PTR (&p->fd, old);
> >       *fb = p;
> >        }
> >      else
> > @@ -4274,7 +4308,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
> >            add (i.e., double free).  */
> >         if (__builtin_expect (old == p, 0))
> >           malloc_printerr ("double free or corruption (fasttop)");
> > -       p->fd = old2 = old;
> > +       old2 = old;
> > +       p->fd = PROTECT_PTR (&p->fd, old);
> >       }
> >        while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
> >            != old2);
> > @@ -4472,13 +4507,17 @@ static void malloc_consolidate(mstate av)
> >      if (p != 0) {
> >        do {
> >       {
> > +       if (__glibc_unlikely (!aligned_OK (p)))
> > +         malloc_printerr ("malloc_consolidate(): " \
> > +                         "un-aligned fastbin chunk detected");
> > +
> >         unsigned int idx = fastbin_index (chunksize (p));
> >         if ((&fastbin (av, idx)) != fb)
> >           malloc_printerr ("malloc_consolidate(): invalid chunk size");
> >       }
> >
> >       check_inuse_chunk(av, p);
> > -     nextp = p->fd;
> > +     nextp = REVEAL_PTR (p->fd);
> >
> >       /* Slightly streamlined version of consolidation code in free() */
> >       size = chunksize (p);
> > @@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
> >
> >    for (i = 0; i < NFASTBINS; ++i)
> >      {
> > -      for (p = fastbin (av, i); p != 0; p = p->fd)
> > +      for (p = fastbin (av, i);
> > +           p != 0;
> > +           p = REVEAL_PTR (p->fd))
> >          {
> > +          if (__glibc_unlikely (!aligned_OK (p)))
> > +            malloc_printerr ("int_mallinfo(): " \
> > +                             "un-aligned fastbin chunk detected");
> >            ++nfastblocks;
> >            fastavail += chunksize (p);
> >          }
> > @@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
> >
> >             while (p != NULL)
> >               {
> > +          if (__glibc_unlikely (!aligned_OK (p)))
> > +            malloc_printerr ("__malloc_info(): " \
> > +                             "un-aligned fastbin chunk detected");
> >                 ++nthissize;
> > -               p = p->fd;
> > +               p = REVEAL_PTR (p->fd);
> >               }
> >
> >             fastavail += nthissize * thissize;
> > --
> > 2.17.1

[-- Attachment #2: 0001-Add-Safe-Linking-to-fastbins-and-tcache.patch --]
[-- Type: application/octet-stream, Size: 9242 bytes --]

From 73210d07ae44920b6645cf05f821069a4bfcda00 Mon Sep 17 00:00:00 2001
From: Eyal Itkin <eyalit@checkpoint.com>
Date: Fri, 20 Mar 2020 21:19:17 +0200
Subject: [PATCH] Add Safe-Linking to fastbins and tcache

Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbin and tcache) from being tampered by attackers.
The mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the "next"
pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position), will be merged with LSB of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.

An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must guess correctly the alignment bits

On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.

This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the default variance between tests on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2012, and according to their documentation it had an overhead of
less than 2%.
---
 malloc/malloc.c | 73 ++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 60 insertions(+), 13 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 7d7d30bb13..2e25a85029 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -327,6 +327,20 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX
 #endif
 
+/*
+  Safe-Linking:
+  Use randomness from ASLR (mmap_base) to protect single-linked lists
+  of Fast-Bins and TCache.  That is, mask the "next" pointers of the
+  lists' chunks, and also perform allocation alignment checks on them.
+  This mechanism reduces the risk of pointer hijacking, as was done with
+  Safe-Unlinking in the double-linked lists of Small-Bins.
+  It assumes a minimum page size of 4096 bytes (12 bits).  Systems with
+  larger pages provide less entropy, although the pointer mangling
+  still works.
+*/
+#define PROTECT_PTR(pos, ptr) \
+  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
+#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
 
 /*
   REALLOC_ZERO_BYTES_FREES should be set if a call to
@@ -2157,12 +2171,15 @@ do_check_malloc_state (mstate av)
 
       while (p != 0)
         {
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("do_check_malloc_state(): " \
+                             "unaligned fastbin chunk detected");
           /* each chunk claims to be inuse */
           do_check_inuse_chunk (av, p);
           total += chunksize (p);
           /* chunk belongs in this bin */
           assert (fastbin_index (chunksize (p)) == i);
-          p = p->fd;
+          p = REVEAL_PTR (p->fd);
         }
     }
 
@@ -2923,7 +2940,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
      detect a double free.  */
   e->key = tcache;
 
-  e->next = tcache->entries[tc_idx];
+  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);
   tcache->entries[tc_idx] = e;
   ++(tcache->counts[tc_idx]);
 }
@@ -2934,9 +2951,11 @@ static __always_inline void *
 tcache_get (size_t tc_idx)
 {
   tcache_entry *e = tcache->entries[tc_idx];
-  tcache->entries[tc_idx] = e->next;
+  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
   --(tcache->counts[tc_idx]);
   e->key = NULL;
+  if (__glibc_unlikely (!aligned_OK (e)))
+    malloc_printerr ("malloc(): unaligned tcache chunk detected");
   return (void *) e;
 }
 
@@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
       while (tcache_tmp->entries[i])
 	{
 	  tcache_entry *e = tcache_tmp->entries[i];
-	  tcache_tmp->entries[i] = e->next;
+      if (__glibc_unlikely (!aligned_OK (e)))
+        malloc_printerr ("tcache_thread_shutdown(): " \
+                         "unaligned tcache chunk detected");
+	  tcache_tmp->entries[i] = REVEAL_PTR (e->next);
 	  __libc_free (e);
 	}
     }
@@ -3570,8 +3592,11 @@ _int_malloc (mstate av, size_t bytes)
       victim = pp;					\
       if (victim == NULL)				\
 	break;						\
+      pp = REVEAL_PTR (victim->fd);                                     \
+      if (__glibc_unlikely (!aligned_OK (pp)))                          \
+        malloc_printerr ("malloc(): unaligned fastbin chunk detected"); \
     }							\
-  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
+  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
 	 != victim);					\
 
   if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
@@ -3583,8 +3608,11 @@ _int_malloc (mstate av, size_t bytes)
 
       if (victim != NULL)
 	{
+	  if (__glibc_unlikely (!aligned_OK (victim)))
+	    malloc_printerr ("malloc(): unaligned fastbin chunk detected");
+
 	  if (SINGLE_THREAD_P)
-	    *fb = victim->fd;
+	    *fb = REVEAL_PTR (victim->fd);
 	  else
 	    REMOVE_FB (fb, pp, victim);
 	  if (__glibc_likely (victim != NULL))
@@ -3605,8 +3633,10 @@ _int_malloc (mstate av, size_t bytes)
 		  while (tcache->counts[tc_idx] < mp_.tcache_count
 			 && (tc_victim = *fb) != NULL)
 		    {
+		      if (__glibc_unlikely (!aligned_OK (tc_victim)))
+			malloc_printerr ("malloc(): unaligned fastbin chunk detected");
 		      if (SINGLE_THREAD_P)
-			*fb = tc_victim->fd;
+			*fb = REVEAL_PTR (tc_victim->fd);
 		      else
 			{
 			  REMOVE_FB (fb, pp, tc_victim);
@@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
 	    for (tmp = tcache->entries[tc_idx];
 		 tmp;
-		 tmp = tmp->next)
+		 tmp = REVEAL_PTR (tmp->next))
+        {
+	      if (__glibc_unlikely (!aligned_OK (tmp)))
+		malloc_printerr ("free(): unaligned chunk detected in tcache 2");
 	      if (tmp == e)
 		malloc_printerr ("free(): double free detected in tcache 2");
 	    /* If we get here, it was a coincidence.  We've wasted a
 	       few cycles, but don't abort.  */
+        }
 	  }
 
 	if (tcache->counts[tc_idx] < mp_.tcache_count)
@@ -4264,7 +4298,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	   add (i.e., double free).  */
 	if (__builtin_expect (old == p, 0))
 	  malloc_printerr ("double free or corruption (fasttop)");
-	p->fd = old;
+	p->fd = PROTECT_PTR (&p->fd, old);
 	*fb = p;
       }
     else
@@ -4274,7 +4308,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
 	     add (i.e., double free).  */
 	  if (__builtin_expect (old == p, 0))
 	    malloc_printerr ("double free or corruption (fasttop)");
-	  p->fd = old2 = old;
+	  old2 = old;
+	  p->fd = PROTECT_PTR (&p->fd, old);
 	}
       while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
 	     != old2);
@@ -4472,13 +4507,17 @@ static void malloc_consolidate(mstate av)
     if (p != 0) {
       do {
 	{
+	  if (__glibc_unlikely (!aligned_OK (p)))
+	    malloc_printerr ("malloc_consolidate(): " \
+                         "unaligned fastbin chunk detected");
+
 	  unsigned int idx = fastbin_index (chunksize (p));
 	  if ((&fastbin (av, idx)) != fb)
 	    malloc_printerr ("malloc_consolidate(): invalid chunk size");
 	}
 
 	check_inuse_chunk(av, p);
-	nextp = p->fd;
+	nextp = REVEAL_PTR (p->fd);
 
 	/* Slightly streamlined version of consolidation code in free() */
 	size = chunksize (p);
@@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
 
   for (i = 0; i < NFASTBINS; ++i)
     {
-      for (p = fastbin (av, i); p != 0; p = p->fd)
+      for (p = fastbin (av, i);
+           p != 0;
+           p = REVEAL_PTR (p->fd))
         {
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("int_mallinfo(): " \
+                             "unaligned fastbin chunk detected");
           ++nfastblocks;
           fastavail += chunksize (p);
         }
@@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
 
 	      while (p != NULL)
 		{
+          if (__glibc_unlikely (!aligned_OK (p)))
+            malloc_printerr ("__malloc_info(): " \
+                             "unaligned fastbin chunk detected");
 		  ++nthissize;
-		  p = p->fd;
+		  p = REVEAL_PTR (p->fd);
 		}
 
 	      fastavail += nthissize * thissize;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20 19:45           ` Eyal Itkin
@ 2020-03-20 23:58             ` DJ Delorie
  2020-03-21  2:47             ` Carlos O'Donell
  2020-03-30  8:01             ` Andreas Schwab
  2 siblings, 0 replies; 33+ messages in thread
From: DJ Delorie @ 2020-03-20 23:58 UTC (permalink / raw)
  To: Eyal Itkin; +Cc: adhemerval.zanella, libc-alpha


This last patch looks good to me.  Any further comments from anyone else?

Reviewed-by: DJ Delorie <dj@redhat.com>


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20 19:45           ` Eyal Itkin
  2020-03-20 23:58             ` DJ Delorie
@ 2020-03-21  2:47             ` Carlos O'Donell
  2020-03-23 13:52               ` Adhemerval Zanella
  2020-03-29 17:04               ` Carlos O'Donell
  2020-03-30  8:01             ` Andreas Schwab
  2 siblings, 2 replies; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-21  2:47 UTC (permalink / raw)
  To: Eyal Itkin, Adhemerval Zanella, DJ Delorie, Wilco Dijkstra,
	Florian Weimer
  Cc: GNU C Library

On 3/20/20 3:45 PM, Eyal Itkin via Libc-alpha wrote:
> I really am sorry to send the patch as an attachment again; however, I
> had too many difficulties using git send-email and convincing it
> to send the patch as a proper reply to this ongoing thread. I wish Gmail's
> client wouldn't auto-wrap the lines and break the patch, but sadly they
> have refused to implement this feature for over a decade now.
> 
> Regarding the previous feedback, I implemented all of the comments,
> aside from transforming the macros to static inline functions.
> 
> I did try to switch to static functions, but this meant that I had to
> change back the signature of REVEAL_PTR to once again receive two
> arguments for "pos" and "ptr", as using &ptr could no longer work.
> Since this change would have contradicted the previous comment about
> the readability of the code, I undid the change and stayed with macros
> for the sake of readability. Using macros also has the benefit of using
> correctly typed objects instead of generic void * types in the API of
> the functions.

Sometimes you will get conflicting advice from reviewers. We are not
a homogeneous set of individuals ;-)

I like that we might extend Safe-Linking to all lists, including the
exit_function_list, which is a prime target of attack since it has
a singly linked list and could be written to in order to execute
arbitrary function pointers (though that also requires guessing the
PTR_MANGLE result). Unlike malloc, in other less performance-sensitive
functions we can use bits of randomness from a global variable.
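
For the record, a minimal sketch of that idea (hypothetical names only,
not a proposal): a per-process secret, initialized early from e.g.
AT_RANDOM, XORed into the link field:

  static uintptr_t link_cookie;  /* filled once at startup */

  static inline struct exit_function_list *
  protect_link (struct exit_function_list *p)
  {
    /* XOR with the cookie both protects and reveals the link.  */
    return (struct exit_function_list *) ((uintptr_t) p ^ link_cookie);
  }

Forging a link would then require leaking the cookie first, independent
of where the list happens to be mapped.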

Reviews so far:
- Wilco Dijkstra (Arm)
- DJ Delorie (Red Hat)
- Adhemerval Zanella (Linaro)
- Carlos O'Donell (Red Hat)

If Adhemerval, DJ, and Wilco want to give their Reviewed-by, then I'll
add those too.

I see one remaining issue to discuss and it is the emacs dumped heap
state. The rest looks good to me too.

I'm TO'ing Florian here in case he wants to comment. A few years ago
(time flies! 2016!) Florian had an idea to protect the chunk metadata
with xor and some randomness (size and prevsize):
https://sourceware.org/legacy-ml/libc-alpha/2016-10/msg00531.html

So the emacs dumped heap issue is that emacs would previously use
malloc_get_state to dump the heap, and the restore it again with
malloc_set_State.  This was a serious impediment to improving the
malloc implementation and so we stopped supporting malloc_get_state,
but there are still emacs binaries that use malloc_set_state to
restore a dumped heap.

The way we restore a dumped heap means we need to be careful we
don't restore anything that we might expect should have been
protected with Safe-Linking.

I have reviewed the code in hooks.c, particularly malloc_set_state:
we iterate via chunk sizes, so nothing looks at the forward
pointer, and even better, we make them all into fake mmap'd chunks
and never free them. Therefore I think we should be fine with the
dumped emacs state.

If nobody objects I'll push this next week after giving people some
more time to review.

Lastly, do you think it's possible to write a test case that
exercises the failure for tcache and fastbins? You can see in
malloc/tst-dynarray-at-fail.c an example where the test runs in
a subprocess and its stderr is compared to see if it triggers
the expected failure. It would be good to know the code we expect
to protect the chunk is functioning correctly.

The test case can be reviewed as a distinct commit if we want to
keep things moving.
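
As a starting point, something along these lines (a rough standalone
sketch, not wired into the malloc/tst-* harness; it assumes the tcache
is enabled and that the protected "next" pointer is the first word of
the freed chunk's user data, as in the patch) should trigger the
"free(): unaligned chunk detected in tcache 2" abort:

  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  static void
  corrupt_and_free (void)
  {
    void *a = malloc (32);
    void *b = malloc (32);
    free (a);
    free (b);
    /* b's first word now holds the protected link to a.  Clobber it;
       revealing it will almost surely produce an unaligned pointer.  */
    memset (b, 0x41, sizeof (void *));
    /* A second free of 'a' enters the tcache double-free scan, which
       walks the corrupted link and should abort.  */
    free (a);
  }

  int
  main (void)
  {
    pid_t pid = fork ();
    if (pid == 0)
      {
        corrupt_and_free ();
        _exit (0);              /* Not reached if the check fires.  */
      }
    int status;
    waitpid (pid, &status, 0);
    int crashed = WIFSIGNALED (status) && WTERMSIG (status) == SIGABRT;
    printf ("child %s\n", crashed ? "aborted as expected" : "kept running");
    return crashed ? 0 : 1;
  }

A proper testsuite version would of course run this via the support/
subprocess helpers and compare the captured stderr against the expected
message, like tst-dynarray-at-fail.c does.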

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

> From 73210d07ae44920b6645cf05f821069a4bfcda00 Mon Sep 17 00:00:00 2001
> From: Eyal Itkin <eyalit@checkpoint.com>
> Date: Fri, 20 Mar 2020 21:19:17 +0200
> Subject: [PATCH] Add Safe-Linking to fastbins and tcache
> 
> Safe-Linking is a security mechanism that protects single-linked
> lists (such as the fastbin and tcache) from being tampered by attackers.
> The mechanism makes use of randomness from ASLR (mmap_base), and when
> combined with chunk alignment integrity checks, it protects the "next"
> pointers from being hijacked by an attacker.
> 
> While Safe-Unlinking protects double-linked lists (such as the small
> bins), there wasn't any similar protection for attacks against
> single-linked lists. This solution protects against 3 common attacks:
>   * Partial pointer override: modifies the lower bytes (Little Endian)
>   * Full pointer override: hijacks the pointer to an attacker's location
>   * Unaligned chunks: pointing the list to an unaligned address
> 
> The design assumes an attacker doesn't know where the heap is located,
> and uses the ASLR randomness to "sign" the single-linked pointers. We
> mark the pointer as P and the location in which it is stored as L, and
> the calculation will be:
>   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
>   * *L = PROTECT(P)
> 
> This way, the random bits from the address L (which start at the bit
> in the PAGE_SHIFT position), will be merged with LSB of the stored
> protected pointer. This protection layer prevents an attacker from
> modifying the pointer into a controlled value.
> 
> An additional check that the chunks are MALLOC_ALIGNed adds an
> important layer:
>   * Attackers can't point to illegal (unaligned) memory addresses
>   * Attackers must guess correctly the alignment bits
> 
> On standard 32 bit Linux machines, an attack will directly fail 7
> out of 8 times, and on 64 bit machines it will fail 15 out of 16
> times.
> 
> This proposed patch was benchmarked and its effect on the overall
> performance of the heap was negligible and couldn't be distinguished
> from the default variance between tests on the vanilla version. A
> similar protection was added to Chromium's version of TCMalloc
> in 2012, and according to their documentation it had an overhead of
> less than 2%.
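
To make the masking concrete, here is a tiny standalone illustration
(not part of the patch; the addresses are made up and the 12-bit shift
is hard-coded, matching the macro above):

  #include <stdio.h>

  #define PROTECT_PTR(pos, ptr) \
    ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))

  int
  main (void)
  {
    void *chunk = (void *) 0x7f0123456780;  /* hypothetical next chunk */
    void *slot  = (void *) 0x7f0123450010;  /* where its fd is stored  */
    void *stored = PROTECT_PTR (slot, chunk);   /* what free () writes  */
    void *back   = PROTECT_PTR (slot, stored);  /* what malloc () reads */
    /* back == chunk, because XOR with (slot >> 12) is its own inverse.
       Overwriting 'stored' without knowing slot's ASLR bits yields an
       unpredictable, almost surely unaligned pointer instead.  */
    printf ("%p -> %p -> %p\n", chunk, stored, back);
    return 0;
  }
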
> ---
>  malloc/malloc.c | 73 ++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 60 insertions(+), 13 deletions(-)
> 
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 7d7d30bb13..2e25a85029 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -327,6 +327,20 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
>  # define MAX_TCACHE_COUNT UINT16_MAX
>  #endif
>  
> +/*
> +  Safe-Linking:
> +  Use randomness from ASLR (mmap_base) to protect single-linked lists
> +  of Fast-Bins and TCache.  That is, mask the "next" pointers of the
> +  lists' chunks, and also perform allocation alignment checks on them.
> +  This mechanism reduces the risk of pointer hijacking, as was done with
> +  Safe-Unlinking in the double-linked lists of Small-Bins.
> +  It assumes a minimum page size of 4096 bytes (12 bits).  Systems with

OK. Comment explains the magic number 12 below.

> +  larger pages provide less entropy, although the pointer mangling
> +  still works.
> +*/

This should be:

/* Safe-Linking:
   ...  */

To follow comment style for the project (not the style of malloc's older code).

I've fixed this up.

I also fixed up some tab vs. spaces.

> +#define PROTECT_PTR(pos, ptr) \
> +  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
> +#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
>  
>  /*
>    REALLOC_ZERO_BYTES_FREES should be set if a call to
> @@ -2157,12 +2171,15 @@ do_check_malloc_state (mstate av)
>  
>        while (p != 0)
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("do_check_malloc_state(): " \
> +                             "unaligned fastbin chunk detected");
OK.

>            /* each chunk claims to be inuse */
>            do_check_inuse_chunk (av, p);
>            total += chunksize (p);
>            /* chunk belongs in this bin */
>            assert (fastbin_index (chunksize (p)) == i);
> -          p = p->fd;
> +          p = REVEAL_PTR (p->fd);

OK.

>          }
>      }
>  
> @@ -2923,7 +2940,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
>       detect a double free.  */
>    e->key = tcache;
>  
> -  e->next = tcache->entries[tc_idx];
> +  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);

OK.

>    tcache->entries[tc_idx] = e;
>    ++(tcache->counts[tc_idx]);
>  }
> @@ -2934,9 +2951,11 @@ static __always_inline void *
>  tcache_get (size_t tc_idx)
>  {
>    tcache_entry *e = tcache->entries[tc_idx];
> -  tcache->entries[tc_idx] = e->next;
> +  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
>    --(tcache->counts[tc_idx]);
>    e->key = NULL;
> +  if (__glibc_unlikely (!aligned_OK (e)))
> +    malloc_printerr ("malloc(): unaligned tcache chunk detected");

OK.

>    return (void *) e;
>  }
>  
> @@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
>        while (tcache_tmp->entries[i])
>  	{
>  	  tcache_entry *e = tcache_tmp->entries[i];
> -	  tcache_tmp->entries[i] = e->next;
> +      if (__glibc_unlikely (!aligned_OK (e)))
> +        malloc_printerr ("tcache_thread_shutdown(): " \
> +                         "unaligned tcache chunk detected");
> +	  tcache_tmp->entries[i] = REVEAL_PTR (e->next);

OK.

>  	  __libc_free (e);
>  	}
>      }
> @@ -3570,8 +3592,11 @@ _int_malloc (mstate av, size_t bytes)
>        victim = pp;					\
>        if (victim == NULL)				\
>  	break;						\
> +      pp = REVEAL_PTR (victim->fd);                                     \
> +      if (__glibc_unlikely (!aligned_OK (pp)))                          \
> +        malloc_printerr ("malloc(): unaligned fastbin chunk detected"); \

OK.

>      }							\
> -  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
> +  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \

OK.

>  	 != victim);					\
>  
>    if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
> @@ -3583,8 +3608,11 @@ _int_malloc (mstate av, size_t bytes)
>  
>        if (victim != NULL)
>  	{
> +	  if (__glibc_unlikely (!aligned_OK (victim)))
> +	    malloc_printerr ("malloc(): unaligned fastbin chunk detected");
> +
>  	  if (SINGLE_THREAD_P)
> -	    *fb = victim->fd;
> +	    *fb = REVEAL_PTR (victim->fd);

OK.

>  	  else
>  	    REMOVE_FB (fb, pp, victim);
>  	  if (__glibc_likely (victim != NULL))
> @@ -3605,8 +3633,10 @@ _int_malloc (mstate av, size_t bytes)
>  		  while (tcache->counts[tc_idx] < mp_.tcache_count
>  			 && (tc_victim = *fb) != NULL)
>  		    {
> +		      if (__glibc_unlikely (!aligned_OK (tc_victim)))
> +			malloc_printerr ("malloc(): unaligned fastbin chunk detected");
>  		      if (SINGLE_THREAD_P)
> -			*fb = tc_victim->fd;
> +			*fb = REVEAL_PTR (tc_victim->fd);

OK.

>  		      else
>  			{
>  			  REMOVE_FB (fb, pp, tc_victim);
> @@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
>  	    for (tmp = tcache->entries[tc_idx];
>  		 tmp;
> -		 tmp = tmp->next)
> +		 tmp = REVEAL_PTR (tmp->next))
> +        {
> +	      if (__glibc_unlikely (!aligned_OK (tmp)))
> +		malloc_printerr ("free(): unaligned chunk detected in tcache 2");

OK.

>  	      if (tmp == e)
>  		malloc_printerr ("free(): double free detected in tcache 2");
>  	    /* If we get here, it was a coincidence.  We've wasted a
>  	       few cycles, but don't abort.  */
> +        }

OK.

>  	  }
>  
>  	if (tcache->counts[tc_idx] < mp_.tcache_count)
> @@ -4264,7 +4298,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	   add (i.e., double free).  */
>  	if (__builtin_expect (old == p, 0))
>  	  malloc_printerr ("double free or corruption (fasttop)");
> -	p->fd = old;
> +	p->fd = PROTECT_PTR (&p->fd, old);

OK.

>  	*fb = p;
>        }
>      else
> @@ -4274,7 +4308,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	     add (i.e., double free).  */
>  	  if (__builtin_expect (old == p, 0))
>  	    malloc_printerr ("double free or corruption (fasttop)");
> -	  p->fd = old2 = old;
> +	  old2 = old;
> +	  p->fd = PROTECT_PTR (&p->fd, old);

OK.

>  	}
>        while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
>  	     != old2);
> @@ -4472,13 +4507,17 @@ static void malloc_consolidate(mstate av)
>      if (p != 0) {
>        do {
>  	{
> +	  if (__glibc_unlikely (!aligned_OK (p)))
> +	    malloc_printerr ("malloc_consolidate(): " \
> +                         "unaligned fastbin chunk detected");

OK.

> +
>  	  unsigned int idx = fastbin_index (chunksize (p));
>  	  if ((&fastbin (av, idx)) != fb)
>  	    malloc_printerr ("malloc_consolidate(): invalid chunk size");
>  	}
>  
>  	check_inuse_chunk(av, p);
> -	nextp = p->fd;
> +	nextp = REVEAL_PTR (p->fd);

OK.

>  
>  	/* Slightly streamlined version of consolidation code in free() */
>  	size = chunksize (p);
> @@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
>  
>    for (i = 0; i < NFASTBINS; ++i)
>      {
> -      for (p = fastbin (av, i); p != 0; p = p->fd)
> +      for (p = fastbin (av, i);
> +           p != 0;
> +           p = REVEAL_PTR (p->fd))
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("int_mallinfo(): " \
> +                             "unaligned fastbin chunk detected");

OK.

>            ++nfastblocks;
>            fastavail += chunksize (p);
>          }
> @@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
>  
>  	      while (p != NULL)
>  		{
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("__malloc_info(): " \
> +                             "unaligned fastbin chunk detected");

OK.

>  		  ++nthissize;
> -		  p = p->fd;
> +		  p = REVEAL_PTR (p->fd);

OK.

>  		}
>  
>  	      fastavail += nthissize * thissize;
> -- 
> 2.17.1
> 


-- 
Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-21  2:47             ` Carlos O'Donell
@ 2020-03-23 13:52               ` Adhemerval Zanella
  2020-03-23 16:11                 ` DJ Delorie
  2020-03-29 17:04               ` Carlos O'Donell
  1 sibling, 1 reply; 33+ messages in thread
From: Adhemerval Zanella @ 2020-03-23 13:52 UTC (permalink / raw)
  To: Carlos O'Donell, Eyal Itkin, DJ Delorie, Wilco Dijkstra,
	Florian Weimer
  Cc: GNU C Library



On 20/03/2020 23:47, Carlos O'Donell wrote:
> I like that we might extend Safe-Linking to all lists, including the
> exit_function_list which is a prime target of attack since it has
> a singly linked list and could be written to in order to execute
> arbitrary function pointers (requires also the probability of guessing
> PTR_MANGLE results). Unlike malloc, in other less performance
> sensitive functions we can use bits of randomness from a global
> variable.
> 
> Reviews so far:
> - Wilco Dijkstra (Arm)
> - DJ Delorie (Red Hat)
> - Adhemerval Zanella (Linaro)
> - Carlos O'Donell (Red Hat)
> 
> If Adhemerval, DJ, and Wilco want to give their Reviewed-by, then I'll
> add those too.

Ok, thanks.

> 
> I see one remaining issue to discuss and it is the emacs dumped heap
> state. The rest looks good to me too.
> 
> I'm TO'ing Florian here in case he wants to comment. A few years ago
> (time flies! 2016!) Florian had an idea to protect the chunk metadata
> with xor and some randomness (size and prevsize):
> https://sourceware.org/legacy-ml/libc-alpha/2016-10/msg00531.html

I forgot about this one.  With ASLR entropy we are bounded by two
factors:

  1. architecture must provide ARCH_HAS_ELF_RANDOMIZE.
  2. the mmap entropy provided bits.

For 1. it seems not all architectures provide it (based on the
kernel's Documentation/features/vm/ELF-ASLR/arch-support.txt, with
some edits to remove architectures not supported by glibc), which
means this mitigation only helps on the architectures that do.

    -----------------------
    |         arch |status|
    -----------------------
    |       alpha: | TODO |
    |         arc: | TODO |
    |         arm: |  ok  |
    |       arm64: |  ok  |
    |        csky: | TODO |
    |        ia64: | TODO |
    |        m68k: | TODO |
    |  microblaze: | TODO |
    |        mips: |  ok  |
    |       nios2: | TODO |
    |      parisc: |  ok  |
    |     powerpc: |  ok  |
    |       riscv: |  ok  |
    |        s390: |  ok  |
    |          sh: | TODO |
    |       sparc: |  ok  |
    |         x86: |  ok  |
    -----------------------

(The Linux internal documentation is in fact outdated; both sparc and
riscv do support ASLR).

Also, the mmap entropy varies widely across architectures and kernel
configurations, as shown in the table below, where I extended the
information compiled on BZ#22853.  Some architectures also provide a
kernel config that can increase the maximum mmap random bits (which is
only accessible to the admin and is system-defined).

  +-----------------+----------------+-------------------+
  | System          |  mmap rnd bits | MMAP_RND_BITS_MAX |
  +-----------------+----------------+-------------------+
  | x86_64          |        28      |            32     |
  | i386 / x32      |         8      |            16     |
  | arm             |         8 [2]  |            16 [3] |
  | aarch64         |        18 [4]  |            33 [5] |
  | powerpc64       |        14 [6]  |            29 [7] |     
  | powerpc         |         7 [8]  |            13 [9] |
  | s390x           |        11      |             -     |
  | s390            |        11      |             -     |
  | hppa            |        13      |             -     |
  | mips64          |        12      |            18     |
  | mips            |         8      |            15     |
  | riscv64         |        18      |            24     |
  | riscv32         |         8      |            17     |
  | sparc64         |        16      |             -     |
  | sparc32         |        11      |             -     |  
  +-----------------+----------------+-------------------+


So although this patch does help improve the situation, I think we should
move to a more robust and arch-neutral way to obtain entropy for malloc
hardening.
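
To put rough numbers on what the current ">> 12" scheme buys
(back-of-envelope, assuming 4k pages, an mmap-based arena, and the
default values from the table above):

  x86_64: 28 random bits sit in address bits 12..39 of L, so
          (L >> 12) gives an unpredictable mask over bits 0..27 of
          the stored pointer (roughly 2^28 guesses for an attacker).
  i386:    8 random bits sit in address bits 12..19 of L, so only
          bits 0..7 of the stored pointer get an unpredictable mask
          (about 2^8 guesses).

So on 32-bit targets the masking itself contributes far fewer unknown
bits and the alignment check carries more of the weight, which is
another argument for a stronger, arch-neutral entropy source.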

Florian's suggestion basically uses AT_RANDOM, which provides 128 bits.
It is *way better* than relying on ASLR, but I am not sure whether
running SHA256 over the AT_RANDOM bytes improves things significantly,
especially given the tradeoff of the text and data increase in the
loader (on some architectures we might replace it with hardware
optimizations).

 [1] https://sourceware.org/pipermail/libc-alpha/2016-October/075546.html

 [2] 15 for 16k pages, and 14 for 64k pages.
 [3] 14 for PAGE_OFFSET=0x40000000 and 15 for PAGE_OFFSET=0x80000000.
 [4] 18 for 4k pages.
 [5] 19 for 36-bit VMA, 24 for 39-bit VMA, 27 for 42-bit VMA,
    30 for 47-bit VMA, 29 for 48-bit VMA / 64k pages, 31 for
    48-bit and 16k pages, 31 for 47-bit VMA / 4k pages, 14 for 64k
    pages, and 16 for 16k pages.
 [6] 18 for 4k pages.
 [7] 33 for 4k pages.
 [8] 5 for 256k pages, 9 for 16k pages, and 11 for 4k pages
 [9] 11 for 256k pages, 15 for 16k pages, and 17 for 4k pages.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-23 13:52               ` Adhemerval Zanella
@ 2020-03-23 16:11                 ` DJ Delorie
  2020-03-23 16:23                   ` Carlos O'Donell
                                     ` (2 more replies)
  0 siblings, 3 replies; 33+ messages in thread
From: DJ Delorie @ 2020-03-23 16:11 UTC (permalink / raw)
  To: Adhemerval Zanella
  Cc: carlos, eyal.itkin, Wilco.Dijkstra, fweimer, libc-alpha

Adhemerval Zanella <adhemerval.zanella@linaro.org> writes:
> So although this patch does help improve the situation, I think we should
> move to a more robust and arch neutral way to obtain entropy for malloc
> hardnening. 

Can we agree to approve this patch, and replace the "12" with something
better later?  That way we'll at least make incremental progress.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-23 16:11                 ` DJ Delorie
@ 2020-03-23 16:23                   ` Carlos O'Donell
  2020-03-23 16:26                   ` Adhemerval Zanella
  2020-03-24 13:44                   ` Wilco Dijkstra
  2 siblings, 0 replies; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-23 16:23 UTC (permalink / raw)
  To: DJ Delorie, Adhemerval Zanella
  Cc: eyal.itkin, Wilco.Dijkstra, fweimer, libc-alpha

On 3/23/20 12:11 PM, DJ Delorie wrote:
> Adhemerval Zanella <adhemerval.zanella@linaro.org> writes:
>> So although this patch does help improve the situation, I think we should
>> move to a more robust and arch neutral way to obtain entropy for malloc
>> hardnening. 
> 
> Can we agree to approve this patch, and replace the "12" with something
> better later?  That way we'll at least make incremental progress.
 
I agree. 

I'm fine with using ASLR/12-shift to get entropy. 

I think the next logical step is to revive Florian's patch to harden prev
and prevsize and store entropy in a global.

So I see this as two steps.

-- 
Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-23 16:11                 ` DJ Delorie
  2020-03-23 16:23                   ` Carlos O'Donell
@ 2020-03-23 16:26                   ` Adhemerval Zanella
  2020-03-24 13:44                   ` Wilco Dijkstra
  2 siblings, 0 replies; 33+ messages in thread
From: Adhemerval Zanella @ 2020-03-23 16:26 UTC (permalink / raw)
  To: DJ Delorie; +Cc: carlos, eyal.itkin, Wilco.Dijkstra, fweimer, libc-alpha



On 23/03/2020 13:11, DJ Delorie wrote:
> Adhemerval Zanella <adhemerval.zanella@linaro.org> writes:
>> So although this patch does help improve the situation, I think we should
>> move to a more robust and arch neutral way to obtain entropy for malloc
>> hardnening. 
> 
> Can we agree to approve this patch, and replace the "12" with something
> better later?  That way we'll at least make incremental progress.
> 

Sorry if I wasn't clear, I am OK with this patch (I'm replying with OK
to add my Reviewed-by).

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-23 16:11                 ` DJ Delorie
  2020-03-23 16:23                   ` Carlos O'Donell
  2020-03-23 16:26                   ` Adhemerval Zanella
@ 2020-03-24 13:44                   ` Wilco Dijkstra
  2 siblings, 0 replies; 33+ messages in thread
From: Wilco Dijkstra @ 2020-03-24 13:44 UTC (permalink / raw)
  To: DJ Delorie, Adhemerval Zanella; +Cc: carlos, eyal.itkin, fweimer, libc-alpha

Hi,

>Adhemerval Zanella <adhemerval.zanella@linaro.org> writes:
>> So although this patch does help improve the situation, I think we should
>> move to a more robust and arch neutral way to obtain entropy for malloc
>> hardnening. 

Well I'm wondering whether it will ever be possible to harden properly - the key
problem is storing internal pointers in user data which may be corrupted. A
safer approach would be to use separate arrays/bitmaps to store the free list.
This may even be faster due to better locality.

> Can we agree to approve this patch, and replace the "12" with something
> better later?  That way we'll at least make incremental progress.

Agreed. The latest version of the patch looks good to me.

Wilco

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-21  2:47             ` Carlos O'Donell
  2020-03-23 13:52               ` Adhemerval Zanella
@ 2020-03-29 17:04               ` Carlos O'Donell
  2020-03-30  6:30                 ` Eyal Itkin
  2020-03-30  7:56                 ` Andreas Schwab
  1 sibling, 2 replies; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-29 17:04 UTC (permalink / raw)
  To: Eyal Itkin, Adhemerval Zanella, DJ Delorie, Wilco Dijkstra,
	Florian Weimer
  Cc: GNU C Library

On 3/20/20 10:47 PM, Carlos O'Donell wrote:
> If nobody objects I'll push this next week after giving people some
> more time to review.

Pushed.

Eyal, Thank you for the high quality submission.

-- 
Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-29 17:04               ` Carlos O'Donell
@ 2020-03-30  6:30                 ` Eyal Itkin
  2020-03-30  7:56                 ` Andreas Schwab
  1 sibling, 0 replies; 33+ messages in thread
From: Eyal Itkin @ 2020-03-30  6:30 UTC (permalink / raw)
  To: Carlos O'Donell
  Cc: Adhemerval Zanella, DJ Delorie, Wilco Dijkstra, Florian Weimer,
	GNU C Library

Thanks a lot for the help with this patch, I really appreciate it.

I am now working on converting my tests for Safe-Linking into a test
suitable for glibc's testsuite, and I will submit it as an additional
patch, as was suggested by Carlos.
Thanks again,
Eyal.


On Sun, Mar 29, 2020 at 8:04 PM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On 3/20/20 10:47 PM, Carlos O'Donell wrote:
> > If nobody objects I'll push this next week after giving people some
> > more time to review.
>
> Pushed.
>
> Eyal, Thank you for the high quality submission.
>
> --
> Cheers,
> Carlos.
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-29 17:04               ` Carlos O'Donell
  2020-03-30  6:30                 ` Eyal Itkin
@ 2020-03-30  7:56                 ` Andreas Schwab
  1 sibling, 0 replies; 33+ messages in thread
From: Andreas Schwab @ 2020-03-30  7:56 UTC (permalink / raw)
  To: Carlos O'Donell via Libc-alpha
  Cc: Eyal Itkin, Adhemerval Zanella, DJ Delorie, Wilco Dijkstra,
	Florian Weimer, Carlos O'Donell

https://build.opensuse.org/package/live_build_log/home:Andreas_Schwab:glibc/glibc/f/i586
https://build.opensuse.org/package/live_build_log/home:Andreas_Schwab:glibc/glibc/p/ppc

[ 1025s] malloc_consolidate(): unaligned fastbin chunk detected

Andreas.

-- 
Andreas Schwab, SUSE Labs, schwab@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-20 19:45           ` Eyal Itkin
  2020-03-20 23:58             ` DJ Delorie
  2020-03-21  2:47             ` Carlos O'Donell
@ 2020-03-30  8:01             ` Andreas Schwab
  2020-03-30  8:44               ` Eyal Itkin
  2 siblings, 1 reply; 33+ messages in thread
From: Andreas Schwab @ 2020-03-30  8:01 UTC (permalink / raw)
  To: Eyal Itkin via Libc-alpha

On Mar 20 2020, Eyal Itkin via Libc-alpha wrote:

> @@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
>        while (tcache_tmp->entries[i])
>  	{
>  	  tcache_entry *e = tcache_tmp->entries[i];
> -	  tcache_tmp->entries[i] = e->next;
> +      if (__glibc_unlikely (!aligned_OK (e)))
> +        malloc_printerr ("tcache_thread_shutdown(): " \
> +                         "unaligned tcache chunk detected");
> +	  tcache_tmp->entries[i] = REVEAL_PTR (e->next);
>  	  __libc_free (e);

Wrong indentation, extra backslash.

> @@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>  	    LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
>  	    for (tmp = tcache->entries[tc_idx];
>  		 tmp;
> -		 tmp = tmp->next)
> +		 tmp = REVEAL_PTR (tmp->next))
> +        {
> +	      if (__glibc_unlikely (!aligned_OK (tmp)))
> +		malloc_printerr ("free(): unaligned chunk detected in tcache 2");
>  	      if (tmp == e)
>  		malloc_printerr ("free(): double free detected in tcache 2");
>  	    /* If we get here, it was a coincidence.  We've wasted a
>  	       few cycles, but don't abort.  */
> +        }

Wrong indentation.

> @@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
>  
>    for (i = 0; i < NFASTBINS; ++i)
>      {
> -      for (p = fastbin (av, i); p != 0; p = p->fd)
> +      for (p = fastbin (av, i);
> +           p != 0;
> +           p = REVEAL_PTR (p->fd))
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("int_mallinfo(): " \
> +                             "unaligned fastbin chunk detected");

Extra backslash.

> @@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
>  
>  	      while (p != NULL)
>  		{
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("__malloc_info(): " \
> +                             "unaligned fastbin chunk detected");
>  		  ++nthissize;

Wrong indentation, extra backslash.

Andreas.

-- 
Andreas Schwab, SUSE Labs, schwab@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-30  8:01             ` Andreas Schwab
@ 2020-03-30  8:44               ` Eyal Itkin
  2020-03-30 13:56                 ` Carlos O'Donell
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-03-30  8:44 UTC (permalink / raw)
  To: Andreas Schwab; +Cc: Eyal Itkin via Libc-alpha, Adhemerval Zanella

Sorry about the indentations. The file has mixed tabs and spaces, sometimes
even in the same code line, which makes it hard to know what's right. In
each case I tried to mimic the behavior of the lines before / after my
addition.

Regarding the new issue in malloc_consolidate, I'm going over it now as it
didn't happen on my intel x64 environment. Will update you soon about it.


On Mon, 30 Mar 2020, 11:01 Andreas Schwab, <schwab@suse.de> wrote:

> On Mar 20 2020, Eyal Itkin via Libc-alpha wrote:
>
> > @@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
> >        while (tcache_tmp->entries[i])
> >       {
> >         tcache_entry *e = tcache_tmp->entries[i];
> > -       tcache_tmp->entries[i] = e->next;
> > +      if (__glibc_unlikely (!aligned_OK (e)))
> > +        malloc_printerr ("tcache_thread_shutdown(): " \
> > +                         "unaligned tcache chunk detected");
> > +       tcache_tmp->entries[i] = REVEAL_PTR (e->next);
> >         __libc_free (e);
>
> Wrong indentation, extra backslash.
>
> > @@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
> >           LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
> >           for (tmp = tcache->entries[tc_idx];
> >                tmp;
> > -              tmp = tmp->next)
> > +              tmp = REVEAL_PTR (tmp->next))
> > +        {
> > +           if (__glibc_unlikely (!aligned_OK (tmp)))
> > +             malloc_printerr ("free(): unaligned chunk detected in
> tcache 2");
> >             if (tmp == e)
> >               malloc_printerr ("free(): double free detected in tcache
> 2");
> >           /* If we get here, it was a coincidence.  We've wasted a
> >              few cycles, but don't abort.  */
> > +        }
>
> Wrong indentation.
>
> > @@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
> >
> >    for (i = 0; i < NFASTBINS; ++i)
> >      {
> > -      for (p = fastbin (av, i); p != 0; p = p->fd)
> > +      for (p = fastbin (av, i);
> > +           p != 0;
> > +           p = REVEAL_PTR (p->fd))
> >          {
> > +          if (__glibc_unlikely (!aligned_OK (p)))
> > +            malloc_printerr ("int_mallinfo(): " \
> > +                             "unaligned fastbin chunk detected");
>
> Extra backslash.
>
> > @@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
> >
> >             while (p != NULL)
> >               {
> > +          if (__glibc_unlikely (!aligned_OK (p)))
> > +            malloc_printerr ("__malloc_info(): " \
> > +                             "unaligned fastbin chunk detected");
> >                 ++nthissize;
>
> Wrong indentation, extra backslash.
>
> Andreas.
>
> --
> Andreas Schwab, SUSE Labs, schwab@suse.de
> GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
> "And now for something completely different."
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-30  8:44               ` Eyal Itkin
@ 2020-03-30 13:56                 ` Carlos O'Donell
  2020-03-30 17:19                   ` Eyal Itkin
  2020-10-31 13:33                   ` H.J. Lu
  0 siblings, 2 replies; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-30 13:56 UTC (permalink / raw)
  To: Eyal Itkin, Andreas Schwab; +Cc: Eyal Itkin via Libc-alpha

On 3/30/20 4:44 AM, Eyal Itkin via Libc-alpha wrote:
> Sorry about the indentations. The file has mixed tabs and spaces, sometimes
> even in the same code line, which makes it hard to know what's right. In
> each case I tried to mimic the behavior of the lines before / after my
> addition.

No worries, this is entirely my fault. I thought I'd fixed all of those up
during my import of your patch. That's my responsibility really. I'll correct
them and push the cleanup.

> Regarding the new issue in malloc_consolidate, I'm going over it now as it
> didn't happen on my intel x64 environment. Will update you soon about it.

Thanks!

-- 
Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-30 13:56                 ` Carlos O'Donell
@ 2020-03-30 17:19                   ` Eyal Itkin
  2020-10-31 13:33                   ` H.J. Lu
  1 sibling, 0 replies; 33+ messages in thread
From: Eyal Itkin @ 2020-03-30 17:19 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: Andreas Schwab, Eyal Itkin via Libc-alpha

Submitted a separate patch to fix the bug which crashed on 32 bit architectures.
Also fixed the redundant '\' chars at the end of lines as requested by Andreas.

Sorry for the trouble with my patch,
Eyal.

On Mon, Mar 30, 2020 at 4:56 PM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On 3/30/20 4:44 AM, Eyal Itkin via Libc-alpha wrote:
> > Sorry about the indentations. The file has mixed tabs and spaces, sometimes
> > even in the same code line, which makes it hard to know what's right. In
> > each case I tried to mimick the behavior of the lines before / after my
> > addition.
>
> No worries, this is entirely my fault. I thought I'd fixed all of those up
> during my import of your patch. That's my responsibility really. I'll correct
> them and push the cleanup.
>
> > Regarding the new issue in malloc_consolidate, I'm going over it now as it
> > didn't happen on my intel x64 environment. Will update you soon about it.
>
> Thanks!
>
> --
> Cheers,
> Carlos.
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-30 13:56                 ` Carlos O'Donell
  2020-03-30 17:19                   ` Eyal Itkin
@ 2020-10-31 13:33                   ` H.J. Lu
  2020-10-31 14:49                     ` Eyal Itkin
  1 sibling, 1 reply; 33+ messages in thread
From: H.J. Lu @ 2020-10-31 13:33 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: Eyal Itkin, Andreas Schwab, Eyal Itkin via Libc-alpha

On Mon, Mar 30, 2020 at 6:56 AM Carlos O'Donell via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On 3/30/20 4:44 AM, Eyal Itkin via Libc-alpha wrote:
> > Sorry about the indentations. The file has mixed tabs and spaces, sometimes
> > even in the same code line, which makes it hard to know what's right. In
> > each case I tried to mimic the behavior of the lines before / after my
> > addition.
>
> No worries, this is entirely my fault. I thought I'd fixed all of those up
> during my import of your patch. That's my responsibility really. I'll correct
> them and push the cleanup.
>
> > Regarding the new issue in malloc_consolidate, I'm going over it now as it
> > didn't happen on my intel x64 environment. Will update you soon about it.
>

This caused:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96504
https://sourceware.org/bugzilla/show_bug.cgi?id=26826

-- 
H.J.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-10-31 13:33                   ` H.J. Lu
@ 2020-10-31 14:49                     ` Eyal Itkin
  2020-10-31 15:27                       ` H.J. Lu
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-10-31 14:49 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, Andreas Schwab, Eyal Itkin via Libc-alpha

Thanks for the notification about this issue.

Going over https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96504, it
seems that a bug was fixed in gcc between gcc-10.2.0 and gcc version
11.0.0 20201029, and I traced this bug to the following commit:
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=f6615c213354fd3ec7fc6238e61cc26bb1830464.
Essentially, the test "co-ret-17-void-ret-coro.C" had an error that
was caught using fsanitize=address, and this is why the test does not
crash for gcc-11.0.0.

In a way, fsanitize=address catches memory errors (use-after-free),
and Safe-Linking is a mechanism that will prevent improper use of
freed heap buffers. And so, it seems that just as Safe-Linking
flagged the faulty test "co-ret-17-void-ret-coro.C" in
gcc-10.2.0, it also warns about the similar test "pr95519-05-gro.C".

I went over the code of said test, and it seems that both tests
are very similar, and that the test indeed suffers from the same
"use-after-free" in line 53 "if (!f.done())". I would recommend
running the test under fsanitize=address, and probably also patching
it using a similar patch to the one that fixed
"co-ret-17-void-ret-coro.C":
https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=f6615c213354fd3ec7fc6238e61cc26bb1830464.

Eyal.

On Sat, Oct 31, 2020 at 3:33 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Mar 30, 2020 at 6:56 AM Carlos O'Donell via Libc-alpha
> <libc-alpha@sourceware.org> wrote:
> >
> > On 3/30/20 4:44 AM, Eyal Itkin via Libc-alpha wrote:
> > > Sorry about the indentations. The file has mixed tabs and spaces, sometimes
> > > even in the same code line, which makes it hard to know what's right. In
> > > each case I tried to mimic the behavior of the lines before / after my
> > > addition.
> >
> > No worries, this is entirely my fault. I thought I'd fixed all of those up
> > during my import of your patch. That's my responsibility really. I'll correct
> > them and push the cleanup.
> >
> > > Regarding the new issue in malloc_consolidate, I'm going over it now as it
> > > didn't happen on my intel x64 environment. Will update you soon about it.
> >
>
> This caused:
>
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96504
> https://sourceware.org/bugzilla/show_bug.cgi?id=26826
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-10-31 14:49                     ` Eyal Itkin
@ 2020-10-31 15:27                       ` H.J. Lu
  0 siblings, 0 replies; 33+ messages in thread
From: H.J. Lu @ 2020-10-31 15:27 UTC (permalink / raw)
  To: Eyal Itkin; +Cc: Carlos O'Donell, Andreas Schwab, Eyal Itkin via Libc-alpha

On Sat, Oct 31, 2020 at 7:49 AM Eyal Itkin <eyal.itkin@gmail.com> wrote:
>
> Thanks for the notification about this issue.
>
> Going over https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96504, it
> seems that a bug was fixed in gcc between gcc-10.2.0 and gcc version
> 11.0.0 20201029, and I traced this bug to the following commit:
> https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=f6615c213354fd3ec7fc6238e61cc26bb1830464.
> Essentially, the test "co-ret-17-void-ret-coro.C" had an error that
> was caught using fsanitize=address, and this is why the test does not
> crash for gcc-11.0.0.
>
> In a way, fsanitize=address catches memory errors (use-after-free),
> and Safe-Linking is a mechanism that will prevent improper use of
> freed heap buffers. And so, it seems that just like Safe-Linking
> prompted about the faulty test  "co-ret-17-void-ret-coro.C" in
> gcc-10.2.0, it also warns about the similar test: "pr95519-05-gro.C".
>
> I went over the code of the said test, and it seems that both tests
> are very similar, and that the test indeed suffers from the same
> "use-after-free" in line 53 "if (!f.done())". I would recommend
> testing the test using fsanitize=address, and probably also patching
> it using a similar patch to the one that fixed
> "co-ret-17-void-ret-coro.C":
> https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=f6615c213354fd3ec7fc6238e61cc26bb1830464.
>

It is a use after free bug in GCC:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96504#c3

-- 
H.J.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-18 21:28         ` Carlos O'Donell
@ 2020-03-19  6:41           ` Eyal Itkin
  0 siblings, 0 replies; 33+ messages in thread
From: Eyal Itkin @ 2020-03-19  6:41 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: Carlos O'Donell, GNU C Library

Happy to hear that all the legal issues have been resolved.

What are the next steps? Have you read the white paper?
(https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt)
It should contain a detailed explanation for the reasons behind this
feature, and also explain how it works.

The patch itself simply applies the single-linked-list pointer masking
(and alignment checks) to both the Fast Bins and TCache lists.

Will be happy to assist in any way I can,
Eyal.

On Wed, Mar 18, 2020 at 11:28 PM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On Wed, Mar 4, 2020 at 10:30 PM Carlos O'Donell <carlos@redhat.com> wrote:
> >
> > On 3/4/20 8:19 PM, Eyal Itkin wrote:
> > > It took some time, but I signed all the forms, and just received back
> > > the mutually signed FSF form.
> >
> > This is great news!
> >
> > I have reviewed the results, and unfortunately I have a question for
> > the FSF which I hope gets resolved quickly. Once it's resolved we can
> > start reviewing the patches (looks like a typo).
>
> OK, the typo has been fixed, sorry for this.
>
> > > What is needed now in order to proceed with the patch that I've submitted?
> >
> > Once I get confirmation from the FSF about the issue then I can review the
> > patch starting with the high-level architecture.
>
> Now we can start the review!
>
> Cheers,
> Carlos.
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-05  3:30       ` Carlos O'Donell
@ 2020-03-18 21:28         ` Carlos O'Donell
  2020-03-19  6:41           ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-18 21:28 UTC (permalink / raw)
  To: Eyal Itkin, Carlos O'Donell; +Cc: GNU C Library

On Wed, Mar 4, 2020 at 10:30 PM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On 3/4/20 8:19 PM, Eyal Itkin wrote:
> > It took some time, but I signed all the forms, and just received back
> > the mutually signed FSF form.
>
> This is great news!
>
> I have reviewed the results, and unfortunately I have a question for
> the FSF which I hope gets resolved quickly. Once it's resolved we can
> start reviewing the patches (looks like a typo).

OK, the typo has been fixed, sorry for this.

> > What is needed now in order to proceed with the patch that I've submitted?
>
> Once I get confirmation from the FSF about the issue then I can review the
> patch starting with the high-level architecture.

Now we can start the review!

Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-03-05  1:19     ` Eyal Itkin
@ 2020-03-05  3:30       ` Carlos O'Donell
  2020-03-18 21:28         ` Carlos O'Donell
  0 siblings, 1 reply; 33+ messages in thread
From: Carlos O'Donell @ 2020-03-05  3:30 UTC (permalink / raw)
  To: Eyal Itkin, Carlos O'Donell; +Cc: libc-alpha

On 3/4/20 8:19 PM, Eyal Itkin wrote:
> It took some time, but I signed all the forms, and just received back
> the mutually signed FSF form.

This is great news!

I have reviewed the results, and unfortunately I have a question for
the FSF which I hope gets resolved quickly. Once it's resolved we can
start reviewing the patches (looks like a typo).
 
> What is needed now in order to proceed with the patch that I've submitted?

Once I get confirmation from the FSF about the issue then I can review the
patch starting with the high-level architecture.

> Thanks again for your cooperation,

Thank you for working through the contribution process!

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-02-03 19:48   ` Eyal Itkin
@ 2020-03-05  1:19     ` Eyal Itkin
  2020-03-05  3:30       ` Carlos O'Donell
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-03-05  1:19 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha

It took some time, but I signed all the forms, and just received back
the mutually signed FSF form.

What is needed now in order to proceed with the patch that I've submitted?

Thanks again for your cooperation,
Eyal Itkin.


On Mon, 3 Feb 2020, 21:47 Eyal Itkin, <eyal.itkin@gmail.com> wrote:
>
> Thanks for your fast response.
> I've just sent an assignment using your supplied form.
>
> If anything else is needed I will be more than happy to assist.
>
> Eyal.
>
> On Mon, Feb 3, 2020 at 8:10 PM Carlos O'Donell <codonell@redhat.com> wrote:
> >
> > On 2/2/20 4:43 AM, Eyal Itkin wrote:
> > > Safe-Linking is a security mechanism that protects single-linked
> > > lists (such as the fastbin and tcache) from being tampered with by attackers.
> > > The mechanism makes use of randomness from ASLR (mmap_base), and
> > > when combined with chunk alignment integrity checks, it protects the
> > > pointers from being hijacked by an attacker.
> > >
> > > While Safe-Unlinking protects double-linked lists (such as the small
> > > bins), there wasn't any similar protection for attacks against
> > > single-linked lists. This solution protects against 3 common attacks:
> > >   * Partial pointer override: modifies the lower bytes (Little Endian)
> > >   * Full pointer override: hijacks the pointer to an attacker's location
> > >   * Unaligned chunks: pointing the list to an unaligned address
> > >
> > > The design assumes an attacker doesn't know where the heap is located,
> > > and uses the ASLR randomness to "sign" the single-linked pointers. We
> > > mark the pointer as P and the location in which it is stored as L, and
> > > the calculation will be:
> > >   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
> > >   * *L = PROTECT(P)
> > >
> > > This way, the random bits from the address L (which start at the bit
> > > in the PAGE_SHIFT position) are merged with the LSBs of the stored
> > > protected pointer. This protection layer prevents an attacker from
> > > modifying the pointer into a controlled value.
> > >
> > > An additional check that the chunks are MALLOC_ALIGNed adds an
> > > important layer:
> > >   * Attackers can't point to illegal (unaligned) memory addresses
> > >   * Attackers must guess correctly the alignment bits
> > >
> > > On standard 32 bit Linux machines, an attack will directly fail 7
> > > out of 8 times, and on 64 bit machines it will fail 15 out of 16
> > > times.
> > >
> > > This proposed patch was benchmarked and its effect on the overall
> > > performance of the heap was negligible and couldn't be distinguished
> > > from the default variance between tests on the vanilla version. A
> > > similar protection was added to Chromium's version of TCMalloc
> > > in 2013, and according to their documentation it had an overhead of
> > > less than 2%.
> > >
> > > For more information, please read our White Paper which can be
> > > found here:
> > > https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt
> >
> > The concept behind your patch looks really interesting, thank you for
> > working on this patch!
> >
> > One of the things we'll need from you is a copyright assignment
> > before the glibc project can accept the patches.
> >
> > Please have a look at our "Contribution Checklist"
> > https://sourceware.org/glibc/wiki/Contribution%20checklist
> >
> > "FSF Copyright Assignment"
> > https://sourceware.org/glibc/wiki/Contribution%20checklist#FSF_copyright_Assignment
> >
> > I always suggest a futures assignment so we can accept this patch
> > and all future patches you submit to glibc:
> > http://git.savannah.gnu.org/cgit/gnulib.git/plain/doc/Copyright/request-assign.future
> >
> > Thank you for your contribution!
> >
> > --
> > Cheers,
> > Carlos.
> >

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-02-03 18:10 ` Carlos O'Donell
@ 2020-02-03 19:48   ` Eyal Itkin
  2020-03-05  1:19     ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-02-03 19:48 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha

Thanks for your fast response.
I've just sent an assignment using your supplied form.

If anything else is needed I will be more than happy to assist.

Eyal.

On Mon, Feb 3, 2020 at 8:10 PM Carlos O'Donell <codonell@redhat.com> wrote:
>
> On 2/2/20 4:43 AM, Eyal Itkin wrote:
> > Safe-Linking is a security mechanism that protects single-linked
> > lists (such as the fastbin and tcache) from being tampered with by attackers.
> > The mechanism makes use of randomness from ASLR (mmap_base), and
> > when combined with chunk alignment integrity checks, it protects the
> > pointers from being hijacked by an attacker.
> >
> > While Safe-Unlinking protects double-linked lists (such as the small
> > bins), there wasn't any similar protection for attacks against
> > single-linked lists. This solution protects against 3 common attacks:
> >   * Partial pointer override: modifies the lower bytes (Little Endian)
> >   * Full pointer override: hijacks the pointer to an attacker's location
> >   * Unaligned chunks: pointing the list to an unaligned address
> >
> > The design assumes an attacker doesn't know where the heap is located,
> > and uses the ASLR randomness to "sign" the single-linked pointers. We
> > mark the pointer as P and the location in which it is stored as L, and
> > the calculation will be:
> >   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
> >   * *L = PROTECT(P)
> >
> > This way, the random bits from the address L (which start at the bit
> > in the PAGE_SHIFT position) are merged with the LSBs of the stored
> > protected pointer. This protection layer prevents an attacker from
> > modifying the pointer into a controlled value.
> >
> > An additional check that the chunks are MALLOC_ALIGNed adds an
> > important layer:
> >   * Attackers can't point to illegal (unaligned) memory addresses
> >   * Attackers must guess correctly the alignment bits
> >
> > On standard 32 bit Linux machines, an attack will directly fail 7
> > out of 8 times, and on 64 bit machines it will fail 15 out of 16
> > times.
> >
> > This proposed patch was benchmarked and its effect on the overall
> > performance of the heap was negligible and couldn't be distinguished
> > from the default variance between tests on the vanilla version. A
> > similar protection was added to Chromium's version of TCMalloc
> > in 2013, and according to their documentation it had an overhead of
> > less than 2%.
> >
> > For more information, please read our White Paper which can be
> > found here:
> > https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt
>
> The concept behind your patch looks really interesting, thank you for
> working on this patch!
>
> One of the things we'll need from you is a copyright assignment
> before the glibc project can accept the patches.
>
> Please have a look at our "Contribution Checklist"
> https://sourceware.org/glibc/wiki/Contribution%20checklist
>
> "FSF Copyright Assignment"
> https://sourceware.org/glibc/wiki/Contribution%20checklist#FSF_copyright_Assignment
>
> I always suggest a futures assignment so we can accept this patch
> and all future patches you submit to glibc:
> http://git.savannah.gnu.org/cgit/gnulib.git/plain/doc/Copyright/request-assign.future
>
> Thank you for your contribution!
>
> --
> Cheers,
> Carlos.
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH] Add Safe-Linking to fastbins and tcache
  2020-02-02  9:43 Eyal Itkin
@ 2020-02-03 18:10 ` Carlos O'Donell
  2020-02-03 19:48   ` Eyal Itkin
  0 siblings, 1 reply; 33+ messages in thread
From: Carlos O'Donell @ 2020-02-03 18:10 UTC (permalink / raw)
  To: Eyal Itkin, libc-alpha

On 2/2/20 4:43 AM, Eyal Itkin wrote:
> Safe-Linking is a security mechanism that protects single-linked
> lists (such as the fastbin and tcache) from being tampered with by attackers.
> The mechanism makes use of randomness from ASLR (mmap_base), and
> when combined with chunk alignment integrity checks, it protects the
> pointers from being hijacked by an attacker.
> 
> While Safe-Unlinking protects double-linked lists (such as the small
> bins), there wasn't any similar protection for attacks against
> single-linked lists. This solution protects against 3 common attacks:
>   * Partial pointer override: modifies the lower bytes (Little Endian)
>   * Full pointer override: hijacks the pointer to an attacker's location
>   * Unaligned chunks: pointing the list to an unaligned address
> 
> The design assumes an attacker doesn't know where the heap is located,
> and uses the ASLR randomness to "sign" the single-linked pointers. We
> mark the pointer as P and the location in which it is stored as L, and
> the calculation will be:
>   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
>   * *L = PROTECT(P)
> 
> This way, the random bits from the address L (which start at the bit
> in the PAGE_SHIFT position) are merged with the LSBs of the stored
> protected pointer. This protection layer prevents an attacker from
> modifying the pointer into a controlled value.
> 
> An additional check that the chunks are MALLOC_ALIGNed adds an
> important layer:
>   * Attackers can't point to illegal (unaligned) memory addresses
>   * Attackers must guess correctly the alignment bits
> 
> On standard 32 bit Linux machines, an attack will directly fail 7
> out of 8 times, and on 64 bit machines it will fail 15 out of 16
> times.
> 
> This proposed patch was benchmarked and its effect on the overall
> performance of the heap was negligible and couldn't be distinguished
> from the default variance between tests on the vanilla version. A
> similar protection was added to Chromium's version of TCMalloc
> in 2013, and according to their documentation it had an overhead of
> less than 2%.
> 
> For more information, please read our White Paper which can be
> found here:
> https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt

The concept behind your patch looks really interesting, thank you for
working on this patch!

One of the things we'll need from you is a copyright assignment
before the glibc project can accept the patches.

Please have a look at our "Contribution Checklist"
https://sourceware.org/glibc/wiki/Contribution%20checklist

"FSF Copyright Assignment"
https://sourceware.org/glibc/wiki/Contribution%20checklist#FSF_copyright_Assignment

I always suggest a futures assignment so we can accept this patch
and all future patches you submit to glibc:
http://git.savannah.gnu.org/cgit/gnulib.git/plain/doc/Copyright/request-assign.future

Thank you for your contribution!

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 33+ messages in thread

* [PATCH] Add Safe-Linking to fastbins and tcache
@ 2020-02-02  9:43 Eyal Itkin
  2020-02-03 18:10 ` Carlos O'Donell
  0 siblings, 1 reply; 33+ messages in thread
From: Eyal Itkin @ 2020-02-02  9:43 UTC (permalink / raw)
  To: libc-alpha

Safe-Linking is a security mechanism that protects single-linked
lists (such as the fastbin and tcache) from being tampered with by attackers.
The mechanism makes use of randomness from ASLR (mmap_base), and
when combined with chunk alignment integrity checks, it protects the
pointers from being hijacked by an attacker.

While Safe-Unlinking protects double-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address

The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)

This way, the random bits from the address L (which start at the bit
in the PAGE_SHIFT position) are merged with the LSBs of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
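
To make the calculation above concrete, here is a tiny standalone 64-bit
sketch (again illustrative only, not part of the patch; PAGE_SHIFT of 12 and
the example addresses are assumptions):

  #include <stdio.h>
  #include <stdint.h>
  #include <inttypes.h>

  #define PAGE_SHIFT 12                     /* assumed: 4 KiB pages */

  /* PROTECT and REVEAL are the same operation: XOR the pointer P with
     (L >> PAGE_SHIFT), where L is the address P is stored at.  */
  static uintptr_t
  protect (uintptr_t L, uintptr_t P)
  {
    return (L >> PAGE_SHIFT) ^ P;
  }

  int
  main (void)
  {
    uintptr_t storage;                      /* plays the role of *L */
    uintptr_t L = (uintptr_t) &storage;
    uintptr_t P = 0x7f0123456780;           /* pretend next-chunk address */

    storage = protect (L, P);               /* *L = PROTECT(P) */
    printf ("revealed == P: %d\n", protect (L, storage) == P);

    /* Overwriting *L without knowing the heap layout decodes to a
       scrambled pointer after REVEAL, which the alignment check then
       rejects with high probability.  */
    storage = (uintptr_t) 0x4141414141414141u;
    printf ("forged reveals to 0x%" PRIxPTR "\n", protect (L, storage));
    return 0;
  }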

An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must guess correctly the alignment bits

On standard 32 bit Linux machines, an attack will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.
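
(To spell out where those numbers come from -- assuming chunk alignment of 8
bytes on 32 bit and 16 bytes on 64 bit, which is what the figures above
imply: after REVEAL, a pointer forged without knowledge of the heap address
has effectively random low bits, and the alignment check only passes when
the low 3 or 4 bits are all zero, i.e. with probability 1/8 or 1/16
respectively.)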

This proposed patch was benchmarked and its effect on the overall
performance of the heap was negligible and couldn't be distinguished
from the default variance between tests on the vanilla version. A
similar protection was added to Chromium's version of TCMalloc
in 2013, and according to their documentation it had an overhead of
less than 2%.

For more information, please read our White Paper which can be
found here:
https://github.com/gperftools/gperftools/files/4023520/Safe-Linking-White-Paper.txt

---
 malloc/malloc.c | 61 ++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 48 insertions(+), 13 deletions(-)

diff --git a/malloc/malloc.c b/malloc/malloc.c
index 7d7d30bb13..25a745df9a 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -327,6 +327,15 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
 # define MAX_TCACHE_COUNT UINT16_MAX
 #endif

+/*
+  Safe-Linking:
+  Use randomness from ASLR (mmap_base) to protect single-linked lists
+  of Fast-Bins and TCache. Together with allocation alignment checks, this
+  mechanism reduces the risk of pointer hijacking, as was done with
+  Safe-Unlinking in the double-linked lists of Small-Bins.
+*/
+#define PROTECT_PTR(pos, ptr, type)     ((type)((((size_t)pos) >> PAGE_SHIFT) ^ ((size_t)ptr)))
+#define REVEAL_PTR(pos, ptr, type)      PROTECT_PTR(pos, ptr, type)

 /*
   REALLOC_ZERO_BYTES_FREES should be set if a call to
@@ -2157,12 +2166,14 @@ do_check_malloc_state (mstate av)

       while (p != 0)
         {
+          if (__glibc_unlikely(!aligned_OK(p)))
+            malloc_printerr ("do_check_malloc_state(): un-aligned fastbin chunk detected");
           /* each chunk claims to be inuse */
           do_check_inuse_chunk (av, p);
           total += chunksize (p);
           /* chunk belongs in this bin */
           assert (fastbin_index (chunksize (p)) == i);
-          p = p->fd;
+          p = REVEAL_PTR(&p->fd, p->fd, mchunkptr);
         }
     }

@@ -2923,7 +2934,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
      detect a double free.  */
   e->key = tcache;

-  e->next = tcache->entries[tc_idx];
+  e->next = PROTECT_PTR(&e->next, tcache->entries[tc_idx], tcache_entry *);
   tcache->entries[tc_idx] = e;
   ++(tcache->counts[tc_idx]);
 }
@@ -2934,9 +2945,11 @@ static __always_inline void *
 tcache_get (size_t tc_idx)
 {
   tcache_entry *e = tcache->entries[tc_idx];
-  tcache->entries[tc_idx] = e->next;
+  tcache->entries[tc_idx] = REVEAL_PTR(&e->next, e->next, tcache_entry *);
   --(tcache->counts[tc_idx]);
   e->key = NULL;
+  if (__glibc_unlikely(!aligned_OK(e)))
+    malloc_printerr ("malloc(): un-aligned tcache chunk detected");
   return (void *) e;
 }

@@ -2960,7 +2973,9 @@ tcache_thread_shutdown (void)
       while (tcache_tmp->entries[i])
  {
    tcache_entry *e = tcache_tmp->entries[i];
-   tcache_tmp->entries[i] = e->next;
+      if (__glibc_unlikely(!aligned_OK(e)))
+        malloc_printerr ("tcache_thread_shutdown(): un-aligned tcache chunk detected");
+   tcache_tmp->entries[i] = REVEAL_PTR(&e->next, e->next, tcache_entry *);
    __libc_free (e);
  }
     }
@@ -3570,8 +3585,11 @@ _int_malloc (mstate av, size_t bytes)
       victim = pp; \
       if (victim == NULL) \
  break; \
+      pp = REVEAL_PTR(&victim->fd, victim->fd, mfastbinptr);             \
+      if (__glibc_unlikely(!aligned_OK(pp)))                             \
+        malloc_printerr ("malloc(): un-aligned fastbin chunk detected"); \
     } \
-  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
+  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \
  != victim); \

   if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
@@ -3583,8 +3601,11 @@ _int_malloc (mstate av, size_t bytes)

       if (victim != NULL)
  {
+      if (__glibc_unlikely(!aligned_OK(victim)))
+        malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
+
    if (SINGLE_THREAD_P)
-     *fb = victim->fd;
+     *fb = REVEAL_PTR(&victim->fd, victim->fd, mfastbinptr);
    else
      REMOVE_FB (fb, pp, victim);
    if (__glibc_likely (victim != NULL))
@@ -3605,8 +3626,10 @@ _int_malloc (mstate av, size_t bytes)
    while (tcache->counts[tc_idx] < mp_.tcache_count
  && (tc_victim = *fb) != NULL)
      {
+       if (__glibc_unlikely(!aligned_OK(tc_victim)))
+         malloc_printerr ("malloc(): un-aligned fastbin chunk detected");
        if (SINGLE_THREAD_P)
- *fb = tc_victim->fd;
+ *fb = REVEAL_PTR(&tc_victim->fd, tc_victim->fd, mfastbinptr);
        else
  {
    REMOVE_FB (fb, pp, tc_victim);
@@ -4196,11 +4219,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
      LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
      for (tmp = tcache->entries[tc_idx];
  tmp;
- tmp = tmp->next)
+ tmp = REVEAL_PTR(&tmp->next, tmp->next, tcache_entry *))
+        {
+       if (__glibc_unlikely(!aligned_OK(tmp)))
+ malloc_printerr ("free(): un-aligned chunk detected in tcache 2");
        if (tmp == e)
  malloc_printerr ("free(): double free detected in tcache 2");
      /* If we get here, it was a coincidence.  We've wasted a
         few cycles, but don't abort.  */
+        }
    }

  if (tcache->counts[tc_idx] < mp_.tcache_count)
@@ -4264,7 +4291,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
     add (i.e., double free).  */
  if (__builtin_expect (old == p, 0))
    malloc_printerr ("double free or corruption (fasttop)");
- p->fd = old;
+ p->fd = PROTECT_PTR(&p->fd, old, mchunkptr);
  *fb = p;
       }
     else
@@ -4274,7 +4301,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
       add (i.e., double free).  */
    if (__builtin_expect (old == p, 0))
      malloc_printerr ("double free or corruption (fasttop)");
-   p->fd = old2 = old;
+   old2 = old;
+   p->fd = PROTECT_PTR(&p->fd, old, mchunkptr);
  }
       while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
       != old2);
@@ -4472,13 +4500,16 @@ static void malloc_consolidate(mstate av)
     if (p != 0) {
       do {
  {
+   if (__glibc_unlikely(!aligned_OK(p)))
+     malloc_printerr ("malloc_consolidate(): un-aligned fastbin chunk detected");
+
    unsigned int idx = fastbin_index (chunksize (p));
    if ((&fastbin (av, idx)) != fb)
      malloc_printerr ("malloc_consolidate(): invalid chunk size");
  }

  check_inuse_chunk(av, p);
- nextp = p->fd;
+ nextp = REVEAL_PTR(&p->fd, p->fd, mchunkptr);

  /* Slightly streamlined version of consolidation code in free() */
  size = chunksize (p);
@@ -4896,8 +4927,10 @@ int_mallinfo (mstate av, struct mallinfo *m)

   for (i = 0; i < NFASTBINS; ++i)
     {
-      for (p = fastbin (av, i); p != 0; p = p->fd)
+      for (p = fastbin (av, i); p != 0; p = REVEAL_PTR(&p->fd, p->fd, mchunkptr))
         {
+          if (__glibc_unlikely(!aligned_OK(p)))
+            malloc_printerr ("int_mallinfo(): un-aligned fastbin chunk detected");
           ++nfastblocks;
           fastavail += chunksize (p);
         }
@@ -5437,8 +5470,10 @@ __malloc_info (int options, FILE *fp)

        while (p != NULL)
  {
+          if (__glibc_unlikely(!aligned_OK(p)))
+            malloc_printerr ("__malloc_info(): un-aligned fastbin chunk detected");
    ++nthissize;
-   p = p->fd;
+   p = REVEAL_PTR(&p->fd, p->fd, mchunkptr);
  }

        fastavail += nthissize * thissize;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2020-10-31 15:28 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-19 13:32 [PATCH] Add Safe-Linking to fastbins and tcache Wilco Dijkstra
2020-03-19 15:09 ` Eyal Itkin
2020-03-19 18:15   ` Eyal Itkin
2020-03-20  0:10     ` Wilco Dijkstra
2020-03-20  2:49       ` Carlos O'Donell
2020-03-20  2:53     ` DJ Delorie
2020-03-20  9:35       ` Eyal Itkin
2020-03-20 18:00         ` Adhemerval Zanella
2020-03-20 19:45           ` Eyal Itkin
2020-03-20 23:58             ` DJ Delorie
2020-03-21  2:47             ` Carlos O'Donell
2020-03-23 13:52               ` Adhemerval Zanella
2020-03-23 16:11                 ` DJ Delorie
2020-03-23 16:23                   ` Carlos O'Donell
2020-03-23 16:26                   ` Adhemerval Zanella
2020-03-24 13:44                   ` Wilco Dijkstra
2020-03-29 17:04               ` Carlos O'Donell
2020-03-30  6:30                 ` Eyal Itkin
2020-03-30  7:56                 ` Andreas Schwab
2020-03-30  8:01             ` Andreas Schwab
2020-03-30  8:44               ` Eyal Itkin
2020-03-30 13:56                 ` Carlos O'Donell
2020-03-30 17:19                   ` Eyal Itkin
2020-10-31 13:33                   ` H.J. Lu
2020-10-31 14:49                     ` Eyal Itkin
2020-10-31 15:27                       ` H.J. Lu
  -- strict thread matches above, loose matches on Subject: below --
2020-02-02  9:43 Eyal Itkin
2020-02-03 18:10 ` Carlos O'Donell
2020-02-03 19:48   ` Eyal Itkin
2020-03-05  1:19     ` Eyal Itkin
2020-03-05  3:30       ` Carlos O'Donell
2020-03-18 21:28         ` Carlos O'Donell
2020-03-19  6:41           ` Eyal Itkin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).