From: "H.J. Lu" <hjl.tools@gmail.com>
To: Rongwei Wang <rongwei.wang@linux.alibaba.com>
Cc: GNU C Library <libc-alpha@sourceware.org>,
Florian Weimer <fweimer@redhat.com>,
xuyu@linux.alibaba.com, gavin.dg@linux.alibaba.com
Subject: [PATCH v3] elf: Properly align PT_LOAD segments [BZ #28676]
Date: Thu, 9 Dec 2021 10:04:32 -0800 [thread overview]
Message-ID: <YbJFMFz5Ni7tzDMm@gmail.com> (raw)
In-Reply-To: <CAMe9rOqGKrpTpX0NXB5M6Ac--sxKA-XOhTcZUCwU2cUfHb3bLw@mail.gmail.com>
On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J. Lu wrote:
> On Wed, Dec 8, 2021 at 9:57 PM Rongwei Wang
> <rongwei.wang@linux.alibaba.com> wrote:
> >
> > Currently, ld.so always maps the LOAD segments aligned only to the
> > base page size (e.g. 4k on x86, or 4k, 16k and 64k on arm64). This
> > is a bug, which has been reported at:
> >
> > https://sourceware.org/bugzilla/show_bug.cgi?id=28676
> >
> > This patch fixes it: ld.so now aligns the mapping address of the
> > first LOAD segment to p_align whenever p_align is greater than the
> > current base page size.
> >
> > A testcase:
> > main.c:
> >
> > #include <stdio.h>
> >
> > extern void dso_test(void);
> >
> > int main(void)
> > {
> >   dso_test();
> >   getchar();
> >
> >   return 0;
> > }
> >
> > load.c, used to generate libload.so:
> >
> > #include <stdio.h>
> >
> > int foo __attribute__((aligned(0x200000))) = 1;
> >
> > void dso_test(void)
> > {
> >   printf("dso test\n");
> >   printf("foo: %p\n", &foo);
> > }
> >
> > The steps:
> > $ gcc -O2 -fPIC -c -o load.o load.c
> > $ gcc -shared -Wl,-z,max-page-size=0x200000 -o libload.so load.o
> > $ gcc -no-pie -Wl,-z,max-page-size=0x200000 -O2 -o dso main.c libload.so -Wl,-R,.
> >
> > Before the fix:
> > $ ./dso
> > dso test
> > foo: 0xffff88ae2000
> >
> > After the fix:
> > $ ./dso
> > dso test
> > foo: 0xffff9e000000
> >
> > This fix also makes it simple and practical to back code segments
> > with huge pages.
>
> Please include a testcase, like
>
> https://gitlab.com/x86-glibc/glibc/-/commits/users/hjl/pr28676/master
>
> > Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
> > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
> > ---
> > elf/dl-load.c | 1 +
> > elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
> > include/link.h | 3 +++
> > 3 files changed, 62 insertions(+), 5 deletions(-)
> >
> > diff --git a/elf/dl-load.c b/elf/dl-load.c
> > index e39980fb19..136cfe2fa8 100644
> > --- a/elf/dl-load.c
> > +++ b/elf/dl-load.c
> > @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
> > c->dataend = ph->p_vaddr + ph->p_filesz;
> > c->allocend = ph->p_vaddr + ph->p_memsz;
> > c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
> > + l->l_load_align = ph->p_align;
>
> Can you add an alignment field to
>
> /* This structure describes one PT_LOAD command.
> Its details have been expanded out and converted. */
> struct loadcmd
> {
> ElfW(Addr) mapstart, mapend, dataend, allocend;
> ElfW(Off) mapoff;
> int prot; /* PROT_* bits. */
> };
>
> instead?
>
Hi,
I updated your patch. Please take a look.
H.J.
--
When PT_LOAD segment alignment > the page size, allocate enough space
to ensure that the segment can be properly aligned.
This fixes [BZ #28676].
---
elf/dl-load.c | 1 +
elf/dl-load.h | 2 +-
elf/dl-map-segments.h | 51 +++++++++++++++++++++++++++++++++++++++----
3 files changed, 49 insertions(+), 5 deletions(-)
diff --git a/elf/dl-load.c b/elf/dl-load.c
index bf8957e73c..9a23590bf4 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
c->dataend = ph->p_vaddr + ph->p_filesz;
c->allocend = ph->p_vaddr + ph->p_memsz;
+ c->mapalign = ph->p_align;
c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
/* Determine whether there is a gap between the last segment
diff --git a/elf/dl-load.h b/elf/dl-load.h
index e329d49a81..c121e3456c 100644
--- a/elf/dl-load.h
+++ b/elf/dl-load.h
@@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
Its details have been expanded out and converted. */
struct loadcmd
{
- ElfW(Addr) mapstart, mapend, dataend, allocend;
+ ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
ElfW(Off) mapoff;
int prot; /* PROT_* bits. */
};
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index f9fb110ee3..f147ec232f 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -18,6 +18,52 @@
#include <dl-load.h>
+/* Map a segment and align it properly. */
+
+static __always_inline ElfW(Addr)
+_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
+ const size_t maplength, int fd)
+{
+ if (c->mapalign > GLRO(dl_pagesize))
+ {
+ /* If the segment alignment > the page size, allocate enough space
+ to ensure that the segment can be properly aligned. */
+ ElfW(Addr) maplen = (maplength >= c->mapalign
+ ? (maplength + c->mapalign)
+ : (2 * c->mapalign));
+ ElfW(Addr) map_start
+ = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
+ PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE,
+ -1, 0);
+ if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
+ return map_start;
+
+ ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
+ ElfW(Addr) map_end = map_start_aligned + maplength;
+ map_start_aligned
+ = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
+ maplength, c->prot,
+ MAP_COPY|MAP_FILE|MAP_FIXED,
+ fd, c->mapoff);
+ if (__glibc_likely ((void *) map_start_aligned != MAP_FAILED))
+ {
+ /* Unmap the unused regions. */
+ ElfW(Addr) delta = map_start_aligned - map_start;
+ if (delta)
+ __munmap ((void *) map_start, delta);
+ delta = map_start + maplen - map_end;
+ if (delta)
+ __munmap ((void *) map_end, delta);
+ }
+
+ return map_start_aligned;
+ }
+
+ return (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+ c->prot, MAP_COPY|MAP_FILE,
+ fd, c->mapoff);
+}
+
/* This implementation assumes (as does the corresponding implementation
of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
are always laid out with all segments contiguous (or with gaps
@@ -53,10 +99,7 @@ _dl_map_segments (struct link_map *l, int fd,
- MAP_BASE_ADDR (l));
/* Remember which part of the address space this object uses. */
- l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
- c->prot,
- MAP_COPY|MAP_FILE,
- fd, c->mapoff);
+ l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
--
2.33.1
Thread overview: 9+ messages
2021-12-09 5:57 [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang
2021-12-09 5:57 ` [PATCH v2 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang
2021-12-09 15:14 ` H.J. Lu
2021-12-09 18:04 ` H.J. Lu [this message]
2021-12-09 19:29 ` [PATCH v4] elf: Properly align PT_LOAD segments [BZ #28676] H.J. Lu
2021-12-10 1:58 ` Rongwei Wang
2021-12-10 2:24 ` H.J. Lu
2021-12-10 2:34 ` Rongwei Wang
2021-12-09 6:36 ` [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang