From mboxrd@z Thu Jan  1 00:00:00 1970
From: "H.J. Lu"
Date: Fri, 10 Dec 2021 07:43:56 -0800
Subject: Re: [PATCH v5 1/2] elf: Properly align PT_LOAD segments
To: Rongwei Wang
Cc: GNU C Library <libc-alpha@sourceware.org>, Florian Weimer,
 xuyu@linux.alibaba.com, gavin.dg@linux.alibaba.com
References: <20211204045848.71105-1-rongwei.wang@linux.alibaba.com>
 <20211210123911.86568-1-rongwei.wang@linux.alibaba.com>
 <20211210123911.86568-2-rongwei.wang@linux.alibaba.com>
In-Reply-To: <20211210123911.86568-2-rongwei.wang@linux.alibaba.com>
List-Id: Libc-alpha mailing list <libc-alpha@sourceware.org>

On Fri, Dec 10, 2021 at 4:39 AM Rongwei Wang wrote:
>
> When PT_LOAD segment alignment > the page size, allocate
> enough space to ensure that the segment can be properly
> aligned.
>
> This fix also makes it simple and feasible for code
> segments to use huge pages.
>
> This fixes [BZ #28676].
>
> Signed-off-by: Xu Yu
> Signed-off-by: Rongwei Wang
> ---
>  elf/dl-load.c         |  1 +
>  elf/dl-load.h         |  2 +-
>  elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
>  3 files changed, 47 insertions(+), 5 deletions(-)
>
> diff --git a/elf/dl-load.c b/elf/dl-load.c
> index bf8957e73c..9a23590bf4 100644
> --- a/elf/dl-load.c
> +++ b/elf/dl-load.c
> @@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
>        c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
>        c->dataend = ph->p_vaddr + ph->p_filesz;
>        c->allocend = ph->p_vaddr + ph->p_memsz;
> +      c->mapalign = ph->p_align;
>        c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
>
>        /* Determine whether there is a gap between the last segment
> diff --git a/elf/dl-load.h b/elf/dl-load.h
> index e329d49a81..c121e3456c 100644
> --- a/elf/dl-load.h
> +++ b/elf/dl-load.h
> @@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
>     Its details have been expanded out and converted.  */
>  struct loadcmd
>  {
> -  ElfW(Addr) mapstart, mapend, dataend, allocend;
> +  ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
>    ElfW(Off) mapoff;
>    int prot;                   /* PROT_* bits.  */
>  };
> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
> index f9fb110ee3..74abf324ed 100644
> --- a/elf/dl-map-segments.h
> +++ b/elf/dl-map-segments.h
> @@ -18,6 +18,50 @@
>
>  #include
>
> +/* Map a segment and align it properly.  */
> +
> +static __always_inline ElfW(Addr)
> +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
> +                const size_t maplength, int fd)
> +{
> +  if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize)))
> +    return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot,
> +                               MAP_COPY|MAP_FILE, fd, c->mapoff);
> +
> +  /* If the segment alignment > the page size, allocate enough space to
> +     ensure that the segment can be properly aligned.  */
> +  ElfW(Addr) maplen = (maplength >= c->mapalign
> +                      ? (maplength + c->mapalign)
> +                      : (2 * c->mapalign));
> +  ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
> +                                             PROT_NONE,
> +                                             MAP_ANONYMOUS|MAP_PRIVATE,
> +                                             -1, 0);
> +  if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
> +    return map_start;
> +
> +  ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
> +  map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
> +                                          maplength, c->prot,
> +                                          MAP_COPY|MAP_FILE|MAP_FIXED,
> +                                          fd, c->mapoff);
> +  if (__glibc_unlikely ((void *) map_start_aligned == MAP_FAILED))
> +    __munmap ((void *) map_start, maplen);
> +  else
> +    {
> +      /* Unmap the unused regions.  */
> +      ElfW(Addr) delta = map_start_aligned - map_start;
> +      if (delta)
> +       __munmap ((void *) map_start, delta);
> +      ElfW(Addr) map_end = map_start_aligned + maplength;
> +      delta = map_start + maplen - map_end;
> +      if (delta)
> +       __munmap ((void *) map_end, delta);
> +    }
> +
> +  return map_start_aligned;
> +}
> +
>  /* This implementation assumes (as does the corresponding implementation
>     of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
>     are always laid out with all segments contiguous (or with gaps
> @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd,
>                          - MAP_BASE_ADDR (l));
>
>        /* Remember which part of the address space this object uses.  */
> -      l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
> -                                           c->prot,
> -                                           MAP_COPY|MAP_FILE,
> -                                           fd, c->mapoff);
> +      l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
>        if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
>          return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
>
> --
> 2.27.0
>

LGTM.

Reviewed-by: H.J. Lu

I will check it in for you.

Thanks.

-- 
H.J.