Subject: Re: [PATCH] Add Safe-Linking to fastbins and tcache
From: Carlos O'Donell
Organization: Red Hat
To: Eyal Itkin, Adhemerval Zanella, DJ Delorie, Wilco Dijkstra, Florian Weimer
Cc: GNU C Library
Date: Fri, 20 Mar 2020 22:47:05 -0400
Message-ID: <2dad1998-ea25-20b8-195b-581f98b0a4f4@redhat.com>
List-Id: Libc-alpha mailing list

On 3/20/20 3:45 PM, Eyal Itkin via Libc-alpha wrote:
> I really am sorry to send the patch as an attachment again, however I
> had too many difficulties in using git's send-email and convincing him
> to send it as a proper reply to this ongoing thread.
> I wish Gmail's client won't auto-wrap the lines and break the patch,
> but sadly they refuse to implement this feature for over a decade now.
>
> Regarding the previous feedback, I implemented all of the comments,
> aside from transforming the macros to static inline functions.
>
> I did try to switch to static functions, but this meant that I had to
> change back the signature of REVEAL_PTR to once again receive two
> arguments for "pos" and "ptr", as using &ptr could no longer work.
> Since this change would have contradicted the previous comment about
> the readability of the code, I undid the change and stayed with macros
> in sake of readability. Using macros also have the benefit of using
> correctly typed objects instead of generic void* types in the api of
> the functions.

Sometimes you will get conflicting advice from reviewers. We are not a
homogeneous set of individuals ;-)

I like that we might extend Safe-Linking to all lists, including the
exit_function_list, which is a prime target of attack since it is a
singly linked list and could be written to in order to execute arbitrary
function pointers (though an attacker would also have to guess the
PTR_MANGLE result). Unlike malloc, in other less performance-sensitive
functions we can use bits of randomness from a global variable.

Reviews so far:
- Wilco Dijkstra (Arm)
- DJ Delorie (Red Hat)
- Adhemerval Zanella (Linaro)
- Carlos O'Donell (Red Hat)

If Adhemerval, DJ, and Wilco want to give their Reviewed-by, then I'll
add those too.

I see one remaining issue to discuss, and it is the emacs dumped heap
state. The rest looks good to me too. I'm TO'ing Florian here in case he
wants to comment.

A few years ago (time flies! 2016!) Florian had an idea to protect the
chunk metadata (size and prevsize) with xor and some randomness:
https://sourceware.org/legacy-ml/libc-alpha/2016-10/msg00531.html

So the emacs dumped heap issue is that emacs would previously use
malloc_get_state to dump the heap, and then restore it again with
malloc_set_state. This was a serious impediment to improving the malloc
implementation, so we stopped supporting malloc_get_state, but there are
still emacs binaries that use malloc_set_state to restore a dumped heap.
The way we restore a dumped heap means we need to be careful we don't
restore anything that we might expect to have been protected with
Safe-Linking.

I have reviewed the code in hooks.c, particularly malloc_set_state: we
iterate via chunk sizes so nothing looks at the forward pointer, and
even better, we turn all of the chunks into fake mmap'd chunks and never
free them. Therefore I think we should be fine with the dumped emacs
state.

If nobody objects I'll push this next week after giving people some more
time to review.

Lastly, do you think it's possible to write a test case that exercises
the failure for tcache and fastbins? You can see in
malloc/tst-dynarray-at-fail.c an example where the test runs in a
subprocess and its stderr is compared to see if it triggers the expected
failure. It would be good to know that the code we expect to protect the
chunk is functioning correctly. The test case can be reviewed as a
distinct commit if we want to keep things moving.
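Something along the following lines might serve as a starting point.
This is only a rough sketch on my part, not something from the patch:
the file name tst-safe-linking.c, the bit-flip corruption, and the exact
expected message are assumptions, and it follows the subprocess/stderr
pattern used by the existing malloc tests:

/* Hypothetical malloc/tst-safe-linking.c -- sketch only.  Corrupt a
   tcache "next" pointer and verify that the new
   "malloc(): unaligned tcache chunk detected" check fires.  */
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <support/capture_subprocess.h>
#include <support/check.h>

/* Runs in a subprocess: put two chunks into the tcache, flip the low
   bit of the protected "next" pointer of the chunk at the head of the
   bin, then allocate until the corrupted pointer is followed.  */
static void
corrupt_tcache_next (void *closure)
{
  void *volatile a = malloc (32);
  void *volatile b = malloc (32);
  free (a);
  free (b);		/* b is now the head of the tcache bin.  */

  /* The tcache_entry "next" field is the first word of the user data.
     Flipping its low bit makes REVEAL_PTR produce an unaligned chunk.  */
  *(size_t *) b ^= 1;

  void *volatile c = malloc (32);  /* Pops b, unmasks the bad pointer.  */
  void *volatile d = malloc (32);  /* Should abort with the error below.  */
  (void) c;
  (void) d;
}

static int
do_test (void)
{
  struct support_capture_subprocess result
    = support_capture_subprocess (corrupt_tcache_next, NULL);
  /* The subprocess is expected to die with SIGABRT after printing the
     Safe-Linking diagnostic on stderr.  */
  support_capture_subprocess_check (&result, "safe-linking", -SIGABRT,
				    sc_allow_stderr);
  TEST_VERIFY (strstr (result.err.buffer,
		       "malloc(): unaligned tcache chunk detected") != NULL);
  support_capture_subprocess_free (&result);
  return 0;
}

#include <support/test-driver.c>

The same pattern could presumably be repeated for the fastbin paths by
first filling the tcache so that subsequently freed chunks land in the
fastbins.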
Reviewed-by: Carlos O'Donell

> From 73210d07ae44920b6645cf05f821069a4bfcda00 Mon Sep 17 00:00:00 2001
> From: Eyal Itkin
> Date: Fri, 20 Mar 2020 21:19:17 +0200
> Subject: [PATCH] Add Safe-Linking to fastbins and tcache
>
> Safe-Linking is a security mechanism that protects single-linked
> lists (such as the fastbin and tcache) from being tampered by attackers.
> The mechanism makes use of randomness from ASLR (mmap_base), and when
> combined with chunk alignment integrity checks, it protects the "next"
> pointers from being hijacked by an attacker.
>
> While Safe-Unlinking protects double-linked lists (such as the small
> bins), there wasn't any similar protection for attacks against
> single-linked lists. This solution protects against 3 common attacks:
>   * Partial pointer override: modifies the lower bytes (Little Endian)
>   * Full pointer override: hijacks the pointer to an attacker's location
>   * Unaligned chunks: pointing the list to an unaligned address
>
> The design assumes an attacker doesn't know where the heap is located,
> and uses the ASLR randomness to "sign" the single-linked pointers. We
> mark the pointer as P and the location in which it is stored as L, and
> the calculation will be:
>   * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
>   * *L = PROTECT(P)
>
> This way, the random bits from the address L (which start at the bit
> in the PAGE_SHIFT position), will be merged with LSB of the stored
> protected pointer. This protection layer prevents an attacker from
> modifying the pointer into a controlled value.
>
> An additional check that the chunks are MALLOC_ALIGNed adds an
> important layer:
>   * Attackers can't point to illegal (unaligned) memory addresses
>   * Attackers must guess correctly the alignment bits
>
> On standard 32 bit Linux machines, an attack will directly fail 7
> out of 8 times, and on 64 bit machines it will fail 15 out of 16
> times.
>
> This proposed patch was benchmarked and it's effect on the overall
> performance of the heap was negligible and couldn't be distinguished
> from the default variance between tests on the vanilla version. A
> similar protection was added to Chromium's version of TCMalloc
> in 2012, and according to their documentation it had an overhead of
> less than 2%.
> ---
>  malloc/malloc.c | 73 ++++++++++++++++++++++++++++++++++++++++---------
>  1 file changed, 60 insertions(+), 13 deletions(-)
>
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 7d7d30bb13..2e25a85029 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -327,6 +327,20 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
>  # define MAX_TCACHE_COUNT UINT16_MAX
>  #endif
>
> +/*
> +   Safe-Linking:
> +   Use randomness from ASLR (mmap_base) to protect single-linked lists
> +   of Fast-Bins and TCache. That is, mask the "next" pointers of the
> +   lists' chunks, and also perform allocation alignment checks on them.
> +   This mechanism reduces the risk of pointer hijacking, as was done with
> +   Safe-Unlinking in the double-linked lists of Small-Bins.
> +   It assumes a minimum page size of 4096 bytes (12 bits). Systems with

OK. Comment explains the magic number 12 below.

> +   larger pages provide less entropy, although the pointer mangling
> +   still works.
> +*/

This should be:

/* Safe-Linking:
   ... */

To follow comment style for the project (not the style of malloc's older
code). I've fixed this up. I also fixed up some tab vs. spaces.
> +#define PROTECT_PTR(pos, ptr) \
> +  ((__typeof (ptr)) ((((size_t) pos) >> 12) ^ ((size_t) ptr)))
> +#define REVEAL_PTR(ptr)  PROTECT_PTR (&ptr, ptr)
>
>  /*
>    REALLOC_ZERO_BYTES_FREES should be set if a call to
> @@ -2157,12 +2171,15 @@ do_check_malloc_state (mstate av)
>
>        while (p != 0)
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("do_check_malloc_state(): " \
> +                             "unaligned fastbin chunk detected");

OK.

>            /* each chunk claims to be inuse */
>            do_check_inuse_chunk (av, p);
>            total += chunksize (p);
>            /* chunk belongs in this bin */
>            assert (fastbin_index (chunksize (p)) == i);
> -          p = p->fd;
> +          p = REVEAL_PTR (p->fd);

OK.

>          }
>      }
>
> @@ -2923,7 +2940,7 @@ tcache_put (mchunkptr chunk, size_t tc_idx)
>       detect a double free.  */
>    e->key = tcache;
>
> -  e->next = tcache->entries[tc_idx];
> +  e->next = PROTECT_PTR (&e->next, tcache->entries[tc_idx]);

OK.

>    tcache->entries[tc_idx] = e;
>    ++(tcache->counts[tc_idx]);
>  }
> @@ -2934,9 +2951,11 @@ static __always_inline void *
>  tcache_get (size_t tc_idx)
>  {
>    tcache_entry *e = tcache->entries[tc_idx];
> -  tcache->entries[tc_idx] = e->next;
> +  tcache->entries[tc_idx] = REVEAL_PTR (e->next);
>    --(tcache->counts[tc_idx]);
>    e->key = NULL;
> +  if (__glibc_unlikely (!aligned_OK (e)))
> +    malloc_printerr ("malloc(): unaligned tcache chunk detected");

OK.

>    return (void *) e;
>  }
>
> @@ -2960,7 +2979,10 @@ tcache_thread_shutdown (void)
>        while (tcache_tmp->entries[i])
>          {
>            tcache_entry *e = tcache_tmp->entries[i];
> -          tcache_tmp->entries[i] = e->next;
> +          if (__glibc_unlikely (!aligned_OK (e)))
> +            malloc_printerr ("tcache_thread_shutdown(): " \
> +                             "unaligned tcache chunk detected");
> +          tcache_tmp->entries[i] = REVEAL_PTR (e->next);

OK.

>            __libc_free (e);
>          }
>      }
> @@ -3570,8 +3592,11 @@ _int_malloc (mstate av, size_t bytes)
>        victim = pp;                                        \
>        if (victim == NULL)                                 \
>          break;                                            \
> +      pp = REVEAL_PTR (victim->fd);                       \
> +      if (__glibc_unlikely (!aligned_OK (pp)))            \
> +        malloc_printerr ("malloc(): unaligned fastbin chunk detected"); \

OK.

>      }                                                     \
> -  while ((pp = catomic_compare_and_exchange_val_acq (fb, victim->fd, victim)) \
> +  while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim)) \

OK.

>          != victim);                                       \
>
>    if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
> @@ -3583,8 +3608,11 @@ _int_malloc (mstate av, size_t bytes)
>
>        if (victim != NULL)
>          {
> +          if (__glibc_unlikely (!aligned_OK (victim)))
> +            malloc_printerr ("malloc(): unaligned fastbin chunk detected");
> +
>            if (SINGLE_THREAD_P)
> -            *fb = victim->fd;
> +            *fb = REVEAL_PTR (victim->fd);

OK.

>            else
>              REMOVE_FB (fb, pp, victim);
>            if (__glibc_likely (victim != NULL))
> @@ -3605,8 +3633,10 @@ _int_malloc (mstate av, size_t bytes)
>                while (tcache->counts[tc_idx] < mp_.tcache_count
>                       && (tc_victim = *fb) != NULL)
>                  {
> +                  if (__glibc_unlikely (!aligned_OK (tc_victim)))
> +                    malloc_printerr ("malloc(): unaligned fastbin chunk detected");
>                    if (SINGLE_THREAD_P)
> -                    *fb = tc_victim->fd;
> +                    *fb = REVEAL_PTR (tc_victim->fd);

OK.

>                    else
>                      {
>                        REMOVE_FB (fb, pp, tc_victim);
> @@ -4196,11 +4226,15 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>            LIBC_PROBE (memory_tcache_double_free, 2, e, tc_idx);
>            for (tmp = tcache->entries[tc_idx];
>                 tmp;
> -               tmp = tmp->next)
> +               tmp = REVEAL_PTR (tmp->next))
> +            {
> +              if (__glibc_unlikely (!aligned_OK (tmp)))
> +                malloc_printerr ("free(): unaligned chunk detected in tcache 2");

OK.

>              if (tmp == e)
>                malloc_printerr ("free(): double free detected in tcache 2");
>            /* If we get here, it was a coincidence.  We've wasted a
>               few cycles, but don't abort.  */
> +            }

OK.

>          }
>
>        if (tcache->counts[tc_idx] < mp_.tcache_count)
> @@ -4264,7 +4298,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>              add (i.e., double free).  */
>           if (__builtin_expect (old == p, 0))
>             malloc_printerr ("double free or corruption (fasttop)");
> -         p->fd = old;
> +         p->fd = PROTECT_PTR (&p->fd, old);

OK.

>           *fb = p;
>         }
>       else
> @@ -4274,7 +4308,8 @@ _int_free (mstate av, mchunkptr p, int have_lock)
>                add (i.e., double free).  */
>             if (__builtin_expect (old == p, 0))
>               malloc_printerr ("double free or corruption (fasttop)");
> -           p->fd = old2 = old;
> +           old2 = old;
> +           p->fd = PROTECT_PTR (&p->fd, old);

OK.

>           }
>         while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
>                != old2);
> @@ -4472,13 +4507,17 @@ static void malloc_consolidate(mstate av)
>    if (p != 0) {
>      do {
>        {
> +        if (__glibc_unlikely (!aligned_OK (p)))
> +          malloc_printerr ("malloc_consolidate(): " \
> +                           "unaligned fastbin chunk detected");

OK.

> +
>          unsigned int idx = fastbin_index (chunksize (p));
>          if ((&fastbin (av, idx)) != fb)
>            malloc_printerr ("malloc_consolidate(): invalid chunk size");
>        }
>
>        check_inuse_chunk(av, p);
> -      nextp = p->fd;
> +      nextp = REVEAL_PTR (p->fd);

OK.

>
>        /* Slightly streamlined version of consolidation code in free() */
>        size = chunksize (p);
> @@ -4896,8 +4935,13 @@ int_mallinfo (mstate av, struct mallinfo *m)
>
>    for (i = 0; i < NFASTBINS; ++i)
>      {
> -      for (p = fastbin (av, i); p != 0; p = p->fd)
> +      for (p = fastbin (av, i);
> +           p != 0;
> +           p = REVEAL_PTR (p->fd))
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("int_mallinfo(): " \
> +                             "unaligned fastbin chunk detected");

OK.

>            ++nfastblocks;
>            fastavail += chunksize (p);
>          }
> @@ -5437,8 +5481,11 @@ __malloc_info (int options, FILE *fp)
>
>        while (p != NULL)
>          {
> +          if (__glibc_unlikely (!aligned_OK (p)))
> +            malloc_printerr ("__malloc_info(): " \
> +                             "unaligned fastbin chunk detected");

OK.

>            ++nthissize;
> -          p = p->fd;

OK.

> +          p = REVEAL_PTR (p->fd);

OK.

>          }
>
>        fastavail += nthissize * thissize;
> --
> 2.17.1
>

-- 
Cheers,
Carlos.