Subject: Re: [PATCH v2 3/4] malloc: Move mmap logic to its own function
To: Adhemerval Zanella, libc-alpha@sourceware.org
Cc: Norbert Manthey, Guillaume Morin
References: <20210818142000.128752-1-adhemerval.zanella@linaro.org>
 <20210818142000.128752-4-adhemerval.zanella@linaro.org>
From: Siddhesh Poyarekar
Date: Thu, 19 Aug 2021 06:17:26 +0530
In-Reply-To: <20210818142000.128752-4-adhemerval.zanella@linaro.org>

On 8/18/21 7:49 PM, Adhemerval Zanella via Libc-alpha wrote:
> So it can be used with different pagesize and flags.
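
To make that motivation concrete for other readers: with pagesize and
extra_flags exposed as parameters, a follow-up (e.g. hugepage support) can
reuse the helper without duplicating the alignment and statistics logic.
Hypothetically (hp_pagesize and hp_flags are names I am making up purely for
illustration; nothing in this series defines them), a caller could look like:

/* Illustrative only: try a hugepage-backed mapping first, then fall
   back to the default page size with no extra mmap flags.  */
static void *
sysmalloc_mmap_hp (INTERNAL_SIZE_T nb, mstate av)
{
  if (mp_.hp_pagesize != 0)
    {
      void *mem = sysmalloc_mmap (nb, mp_.hp_pagesize, mp_.hp_flags, av);
      if (mem != MAP_FAILED)
        return mem;
    }
  return sysmalloc_mmap (nb, GLRO (dl_pagesize), 0, av);
}
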
> ---
>  malloc/malloc.c | 155 +++++++++++++++++++++++++-----------------
>  1 file changed, 82 insertions(+), 73 deletions(-)
>
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 1a2c798a35..4bfcea286f 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -2414,6 +2414,85 @@ do_check_malloc_state (mstate av)
>     be extended or replaced.
>   */
>
> +static void *
> +sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags, mstate av)
> +{
> +  long int size;
> +
> +  /*
> +    Round up size to nearest page.  For mmapped chunks, the overhead is one
> +    SIZE_SZ unit larger than for normal chunks, because there is no
> +    following chunk whose prev_size field could be used.
> +
> +    See the front_misalign handling below; for glibc there is no need for
> +    further alignments unless we have high alignment.
> +   */
> +  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
> +    size = ALIGN_UP (nb + SIZE_SZ, pagesize);
> +  else
> +    size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
> +
> +  /* Don't try if size wraps around 0.  */
> +  if ((unsigned long) (size) <= (unsigned long) (nb))
> +    return MAP_FAILED;
> +
> +  char *mm = (char *) MMAP (0, size,
> +                            mtag_mmap_flags | PROT_READ | PROT_WRITE,
> +                            extra_flags);
> +  if (mm == MAP_FAILED)
> +    return mm;
> +
> +  sysmadvise_thp (mm, size);
> +
> +  /*
> +    The offset to the start of the mmapped region is stored in the prev_size
> +    field of the chunk.  This allows us to adjust returned start address to
> +    meet alignment requirements here and in memalign(), and still be able to
> +    compute proper address argument for later munmap in free() and realloc().
> +   */
> +
> +  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
> +
> +  if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
> +    {
> +      /* For glibc, chunk2mem increases the address by CHUNK_HDR_SZ and
> +         MALLOC_ALIGN_MASK is CHUNK_HDR_SZ-1.  Each mmap'ed area is page
> +         aligned and therefore definitely MALLOC_ALIGN_MASK-aligned.  */
> +      assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
> +      front_misalign = 0;
> +    }
> +  else
> +    front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
> +
> +  mchunkptr p;                    /* the allocated/returned chunk */
> +
> +  if (front_misalign > 0)
> +    {
> +      ptrdiff_t correction = MALLOC_ALIGNMENT - front_misalign;
> +      p = (mchunkptr) (mm + correction);
> +      set_prev_size (p, correction);
> +      set_head (p, (size - correction) | IS_MMAPPED);
> +    }
> +  else
> +    {
> +      p = (mchunkptr) mm;
> +      set_prev_size (p, 0);
> +      set_head (p, size | IS_MMAPPED);
> +    }
> +
> +  /* update statistics */
> +  int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
> +  atomic_max (&mp_.max_n_mmaps, new);
> +
> +  unsigned long sum;
> +  sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
> +  atomic_max (&mp_.max_mmapped_mem, sum);
> +
> +  check_chunk (av, p);
> +
> +  return chunk2mem (p);
> +}
> +
>  static void *
>  sysmalloc (INTERNAL_SIZE_T nb, mstate av)
>  {
> @@ -2451,81 +2530,11 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
>        || ((unsigned long) (nb) >= (unsigned long) (mp_.mmap_threshold)
>            && (mp_.n_mmaps < mp_.n_mmaps_max)))
>      {
> -      char *mm;           /* return value from mmap call*/
> -
>      try_mmap:

This is a great opportunity to get rid of this goto; see the untested sketch
just below.
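
If I am reading the rest of the function correctly, the only other jump to
the label is the "else if (!tried_mmap) goto try_mmap;" branch in the
av != &main_arena path.  With the mmap attempt factored out, that branch
could call the helper directly, something like (untested, just to show the
shape):

      else if (!tried_mmap)
        {
          /* We can at least try to use mmap memory: call the new helper
             directly instead of jumping back into the threshold block.  */
          char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
          if (mm != MAP_FAILED)
            return mm;
        }

and then the try_mmap: label (and possibly the tried_mmap flag as well) can
go away entirely.
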
> -      /*
> -         Round up size to nearest page. For mmapped chunks, the overhead
> -         is one SIZE_SZ unit larger than for normal chunks, because there
> -         is no following chunk whose prev_size field could be used.
> -
> -         See the front_misalign handling below, for glibc there is no
> -         need for further alignments unless we have have high alignment.
> -       */
> -      if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
> -        size = ALIGN_UP (nb + SIZE_SZ, pagesize);
> -      else
> -        size = ALIGN_UP (nb + SIZE_SZ + MALLOC_ALIGN_MASK, pagesize);
> +      char *mm = sysmalloc_mmap (nb, pagesize, 0, av);
> +      if (mm != MAP_FAILED)
> +        return mm;
>        tried_mmap = true;
> -
> -      /* Don't try if size wraps around 0 */
> -      if ((unsigned long) (size) > (unsigned long) (nb))
> -        {
> -          mm = (char *) (MMAP (0, size,
> -                               mtag_mmap_flags | PROT_READ | PROT_WRITE, 0));
> -
> -          if (mm != MAP_FAILED)
> -            {
> -              sysmadvise_thp (mm, size);
> -
> -              /*
> -                 The offset to the start of the mmapped region is stored
> -                 in the prev_size field of the chunk. This allows us to adjust
> -                 returned start address to meet alignment requirements here
> -                 and in memalign(), and still be able to compute proper
> -                 address argument for later munmap in free() and realloc().
> -               */
> -
> -              if (MALLOC_ALIGNMENT == CHUNK_HDR_SZ)
> -                {
> -                  /* For glibc, chunk2mem increases the address by
> -                     CHUNK_HDR_SZ and MALLOC_ALIGN_MASK is
> -                     CHUNK_HDR_SZ-1.  Each mmap'ed area is page
> -                     aligned and therefore definitely
> -                     MALLOC_ALIGN_MASK-aligned.  */
> -                  assert (((INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK) == 0);
> -                  front_misalign = 0;
> -                }
> -              else
> -                front_misalign = (INTERNAL_SIZE_T) chunk2mem (mm) & MALLOC_ALIGN_MASK;
> -              if (front_misalign > 0)
> -                {
> -                  correction = MALLOC_ALIGNMENT - front_misalign;
> -                  p = (mchunkptr) (mm + correction);
> -                  set_prev_size (p, correction);
> -                  set_head (p, (size - correction) | IS_MMAPPED);
> -                }
> -              else
> -                {
> -                  p = (mchunkptr) mm;
> -                  set_prev_size (p, 0);
> -                  set_head (p, size | IS_MMAPPED);
> -                }
> -
> -              /* update statistics */
> -
> -              int new = atomic_exchange_and_add (&mp_.n_mmaps, 1) + 1;
> -              atomic_max (&mp_.max_n_mmaps, new);
> -
> -              unsigned long sum;
> -              sum = atomic_exchange_and_add (&mp_.mmapped_mem, size) + size;
> -              atomic_max (&mp_.max_mmapped_mem, sum);
> -
> -              check_chunk (av, p);
> -
> -              return chunk2mem (p);
> -            }
> -        }
>      }
>
>    /* There are no usable arenas and mmap also failed. */
>
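
One last note, not a review comment as such: the comment about stashing the
offset in prev_size is easy to gloss over, so here is a standalone toy (my
own made-up values, not glibc code) that walks through the same arithmetic
for a target where MALLOC_ALIGNMENT (16) is larger than CHUNK_HDR_SZ (8),
as on some 32-bit targets with 16-byte long double alignment:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  const uintptr_t chunk_hdr_sz = 8;      /* 2 * SIZE_SZ on this toy target */
  const uintptr_t malloc_alignment = 16;
  const uintptr_t align_mask = malloc_alignment - 1;

  uintptr_t mm = 0x10000000;             /* page-aligned mmap result */
  uintptr_t mem = mm + chunk_hdr_sz;     /* what chunk2mem (mm) would return */
  uintptr_t front_misalign = mem & align_mask;      /* 8: user pointer misaligned */
  uintptr_t correction = malloc_alignment - front_misalign;

  uintptr_t p = mm + correction;         /* chunk start; prev_size (p) = 8 */
  assert (((p + chunk_hdr_sz) & align_mask) == 0);  /* user pointer now aligned */

  /* munmap_chunk in free() recovers the mapping start from prev_size.  */
  uintptr_t block = p - correction;
  assert (block == mm);

  printf ("mm=%#lx p=%#lx user=%#lx block=%#lx\n",
          (unsigned long) mm, (unsigned long) p,
          (unsigned long) (p + chunk_hdr_sz), (unsigned long) block);
  return 0;
}

In the common MALLOC_ALIGNMENT == CHUNK_HDR_SZ case front_misalign is always
zero, which is exactly what the assert in the new function checks.
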