* [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored
From: Rongwei Wang @ 2021-12-09 5:57 UTC
To: libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg
Hi,
This patch fixes a reported bug:
"p_align on PT_LOAD segment in DSO isn't honored"
https://sourceware.org/bugzilla/show_bug.cgi?id=28676
A testcase was built by H.J. Lu:
https://sourceware.org/bugzilla/attachment.cgi?id=13838
In patch 1/1, I also show a simple testcase adapted from his.
A similar bug affecting ELF binaries was also reported to the kernel:
https://bugzilla.kernel.org/show_bug.cgi?id=215275
A related fix can be found here:
https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/
Originally, we sent that patch to introduce .text huge pages and were
not aware of this bug. I am not sure whether the kernel maintainers
will regard it as a bug, but it deserves another try.
Thanks.
Changelog:
RFC/v1 -> v2
- Patch "elf: align the mapping address of LOAD segments with p_align"
fixed the coding style and added a testcase to the commit message.
RFC link:
https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/
Rongwei Wang (1):
elf: align the mapping address of LOAD segments with p_align
elf/dl-load.c | 1 +
elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
include/link.h | 3 +++
3 files changed, 62 insertions(+), 5 deletions(-)
--
2.27.0
* [PATCH v2 1/1] elf: align the mapping address of LOAD segments with p_align
From: Rongwei Wang @ 2021-12-09 5:57 UTC
To: libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg
ld.so currently maps the LOAD segments aligned only to the base page
size (e.g. 4k on x86, or 4k, 16k and 64k on arm64). This is a bug,
which has been reported:
https://sourceware.org/bugzilla/show_bug.cgi?id=28676
This patch fixes it: ld.so now aligns the mapping address of the first
LOAD segment to p_align when p_align is greater than the current base
page size.
A testcase:
main.c:

#include <stdio.h>

extern void dso_test(void);
int main(void)
{
  dso_test();
  getchar();
  return 0;
}
load.c, used to generate libload.so:

#include <stdio.h>

int foo __attribute__((aligned(0x200000))) = 1;
void dso_test(void)
{
  printf("dso test\n");
  printf("foo: %p\n", &foo);
}
The steps:
$ gcc -O2 -fPIC -c -o load.o load.c
$ gcc -shared -Wl,-z,max-page-size=0x200000 -o libload.so load.o
$ gcc -no-pie -Wl,-z,max-page-size=0x200000 -O2 -o dso main.c libload.so -Wl,-R,.
Before the fix:
$ ./dso
dso test
foo: 0xffff88ae2000
After the fix:
$ ./dso
dso test
foo: 0xffff9e000000
This fix also makes it simple for code segments to use huge pages.
Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
---
elf/dl-load.c | 1 +
elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
include/link.h | 3 +++
3 files changed, 62 insertions(+), 5 deletions(-)
diff --git a/elf/dl-load.c b/elf/dl-load.c
index e39980fb19..136cfe2fa8 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
c->dataend = ph->p_vaddr + ph->p_filesz;
c->allocend = ph->p_vaddr + ph->p_memsz;
c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
+ l->l_load_align = ph->p_align;
/* Determine whether there is a gap between the last segment
and this one. */
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index ac9f09ab4c..fad98eb984 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -18,6 +18,48 @@
#include <dl-load.h>
+static __always_inline void *
+_dl_map_segments_align (const struct loadcmd *c,
+ ElfW(Addr) mappref, int fd, size_t alignment,
+ const size_t maplength)
+{
+ unsigned long map_start, map_start_align, map_end;
+ unsigned long maplen = (maplength >= alignment) ?
+ (maplength + alignment) : (2 * alignment);
+
+ /* Allocate enough space to ensure that address aligned by
+ p_align is included. */
+ map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
+ PROT_NONE,
+ MAP_ANONYMOUS | MAP_PRIVATE,
+ -1, 0);
+ if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
+ {
+ /* If mapping a aligned address failed, then ... */
+ map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+ c->prot,
+ MAP_COPY|MAP_FILE,
+ fd, c->mapoff);
+
+ return (void *) map_start;
+ }
+ map_start_align = ALIGN_UP(map_start, alignment);
+ map_end = map_start_align + maplength;
+
+ /* Remember which part of the address space this object uses. */
+ map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength,
+ c->prot,
+ MAP_COPY|MAP_FILE|MAP_FIXED,
+ fd, c->mapoff);
+ if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED))
+ return MAP_FAILED;
+ if (map_start_align > map_start)
+ __munmap((void *)map_start, map_start_align - map_start);
+ __munmap((void *)map_end, map_start + maplen - map_end);
+
+ return (void *) map_start_align;
+}
+
/* This implementation assumes (as does the corresponding implementation
of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
are always laid out with all segments contiguous (or with gaps
@@ -52,11 +94,22 @@ _dl_map_segments (struct link_map *l, int fd,
c->mapstart & GLRO(dl_use_load_bias))
- MAP_BASE_ADDR (l));
- /* Remember which part of the address space this object uses. */
- l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
- c->prot,
- MAP_COPY|MAP_FILE,
- fd, c->mapoff);
+ /* During mapping, align the mapping address of the LOAD segments
+ according to own p_align. This helps OS map its code segment to
+ huge pages. */
+ if (l->l_load_align > GLRO(dl_pagesize))
+ {
+ l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, mappref,
+ fd, l->l_load_align, maplength);
+ }
+ else
+ {
+ /* Remember which part of the address space this object uses. */
+ l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+ c->prot,
+ MAP_COPY|MAP_FILE,
+ fd, c->mapoff);
+ }
if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
diff --git a/include/link.h b/include/link.h
index aea268439c..fc6ce29fab 100644
--- a/include/link.h
+++ b/include/link.h
@@ -298,6 +298,9 @@ struct link_map
/* Thread-local storage related info. */
+ /* Alignment requirement of the LOAD block. */
+ size_t l_load_align;
+
/* Start of the initialization image. */
void *l_tls_initimage;
/* Size of the initialization image. */
--
2.27.0
* Re: [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored
From: Rongwei Wang @ 2021-12-09 6:36 UTC
To: 20211204045848.71105-1-rongwei.wang, libc-alpha, hjl.tools, fweimer
Cc: xuyu, gavin.dg
Sorry, I made a mistake and sent this patch twice.
The two copies are identical, so reading either one is fine.
Apologies again for the mistake.
On 12/9/21 1:57 PM, Rongwei Wang via Libc-alpha wrote:
> Hi
>
> This patch fixes a reported bug:
>
> "p_align on PT_LOAD segment in DSO isn't honored"
> https://sourceware.org/bugzilla/show_bug.cgi?id=28676
>
> A testcase was built by H.J. Lu:
> https://sourceware.org/bugzilla/attachment.cgi?id=13838
>
> In patch 1/1, I also show a simple testcase adapted from his.
>
> A similar bug affecting ELF binaries was also reported to the kernel:
> https://bugzilla.kernel.org/show_bug.cgi?id=215275
>
> A related fix can be found here:
> https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/
> Originally, we sent that patch to introduce .text huge pages and were
> not aware of this bug. I am not sure whether the kernel maintainers
> will regard it as a bug, but it deserves another try.
>
> Thanks.
>
> Changelog:
>
> RFC/v1 -> v2
>
> - Patch "elf: align the mapping address of LOAD segments with p_align"
> fixed the coding style and added a testcase to the commit message.
>
> RFC link:
> https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/
>
> Rongwei Wang (1):
> elf: align the mapping address of LOAD segments with p_align
>
> elf/dl-load.c | 1 +
> elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
> include/link.h | 3 +++
> 3 files changed, 62 insertions(+), 5 deletions(-)
>
* Re: [PATCH v2 1/1] elf: align the mapping address of LOAD segments with p_align
From: H.J. Lu @ 2021-12-09 15:14 UTC
To: 20211204045848.71105-1-rongwei.wang
Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg
On Wed, Dec 8, 2021 at 9:57 PM Rongwei Wang
<rongwei.wang@linux.alibaba.com> wrote:
>
> ld.so currently maps the LOAD segments aligned only to the base page
> size (e.g. 4k on x86, or 4k, 16k and 64k on arm64). This is a bug,
> which has been reported:
>
> https://sourceware.org/bugzilla/show_bug.cgi?id=28676
>
> This patch fixes it: ld.so now aligns the mapping address of the first
> LOAD segment to p_align when p_align is greater than the current base
> page size.
>
> A testcase:
> main.c:
>
> extern void dso_test(void);
> int main(void)
> {
> dso_test();
> getchar();
>
> return 0;
> }
>
> load.c, used to generate libload.so:
>
> int foo __attribute__((aligned(0x200000))) = 1;
> void dso_test(void)
> {
> printf("dso test\n");
> printf("foo: %p\n", &foo);
> }
>
> The steps:
> $ gcc -O2 -fPIC -c -o load.o load.c
> $ gcc -shared -Wl,-z,max-page-size=0x200000 -o libload.so load.o
> $ gcc -no-pie -Wl,-z,max-page-size=0x200000 -O2 -o dso main.c libload.so -Wl,-R,.
>
> Before fixing:
> $ ./dso
> dso test
> foo: 0xffff88ae2000
>
> After fixed:
> $ ./dso
> dso test
> foo: 0xffff9e000000
>
> This fix also makes it simple for code segments to use huge pages.
Please include a testcase, like
https://gitlab.com/x86-glibc/glibc/-/commits/users/hjl/pr28676/master
> Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
> ---
> elf/dl-load.c | 1 +
> elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
> include/link.h | 3 +++
> 3 files changed, 62 insertions(+), 5 deletions(-)
>
> diff --git a/elf/dl-load.c b/elf/dl-load.c
> index e39980fb19..136cfe2fa8 100644
> --- a/elf/dl-load.c
> +++ b/elf/dl-load.c
> @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
> c->dataend = ph->p_vaddr + ph->p_filesz;
> c->allocend = ph->p_vaddr + ph->p_memsz;
> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
> + l->l_load_align = ph->p_align;
Can you add an alignment field to
/* This structure describes one PT_LOAD command.
Its details have been expanded out and converted. */
struct loadcmd
{
ElfW(Addr) mapstart, mapend, dataend, allocend;
ElfW(Off) mapoff;
int prot; /* PROT_* bits. */
};
instead?
>
> /* Determine whether there is a gap between the last segment
> and this one. */
> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
> index ac9f09ab4c..fad98eb984 100644
> --- a/elf/dl-map-segments.h
> +++ b/elf/dl-map-segments.h
> @@ -18,6 +18,48 @@
>
> #include <dl-load.h>
>
> +static __always_inline void *
> +_dl_map_segments_align (const struct loadcmd *c,
> + ElfW(Addr) mappref, int fd, size_t alignment,
> + const size_t maplength)
> +{
> + unsigned long map_start, map_start_align, map_end;
Use ElfW(Addr) instead of long.
> + unsigned long maplen = (maplength >= alignment) ?
> + (maplength + alignment) : (2 * alignment);
> +
> + /* Allocate enough space to ensure that address aligned by
> + p_align is included. */
> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
> + PROT_NONE,
> + MAP_ANONYMOUS | MAP_PRIVATE,
> + -1, 0);
> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
> + {
> + /* If mapping a aligned address failed, then ... */
an aligned
> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
> + c->prot,
> + MAP_COPY|MAP_FILE,
> + fd, c->mapoff);
> +
> + return (void *) map_start;
> + }
> + map_start_align = ALIGN_UP(map_start, alignment);
> + map_end = map_start_align + maplength;
> +
> + /* Remember which part of the address space this object uses. */
> + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength,
> + c->prot,
> + MAP_COPY|MAP_FILE|MAP_FIXED,
> + fd, c->mapoff);
> + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED))
> + return MAP_FAILED;
> + if (map_start_align > map_start)
> + __munmap((void *)map_start, map_start_align - map_start);
> + __munmap((void *)map_end, map_start + maplen - map_end);
> +
> + return (void *) map_start_align;
> +}
> +
> /* This implementation assumes (as does the corresponding implementation
> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
> are always laid out with all segments contiguous (or with gaps
> @@ -52,11 +94,22 @@ _dl_map_segments (struct link_map *l, int fd,
> c->mapstart & GLRO(dl_use_load_bias))
> - MAP_BASE_ADDR (l));
>
> - /* Remember which part of the address space this object uses. */
> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
> - c->prot,
> - MAP_COPY|MAP_FILE,
> - fd, c->mapoff);
> + /* During mapping, align the mapping address of the LOAD segments
> + according to own p_align. This helps OS map its code segment to
> + huge pages. */
> + if (l->l_load_align > GLRO(dl_pagesize))
> + {
> + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, mappref,
> + fd, l->l_load_align, maplength);
> + }
> + else
> + {
> + /* Remember which part of the address space this object uses. */
> + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
> + c->prot,
> + MAP_COPY|MAP_FILE,
> + fd, c->mapoff);
> + }
Don't use {} for a single statement.
> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
>
> diff --git a/include/link.h b/include/link.h
> index aea268439c..fc6ce29fab 100644
> --- a/include/link.h
> +++ b/include/link.h
> @@ -298,6 +298,9 @@ struct link_map
>
> /* Thread-local storage related info. */
>
> + /* Alignment requirement of the LOAD block. */
> + size_t l_load_align;
> +
> /* Start of the initialization image. */
> void *l_tls_initimage;
> /* Size of the initialization image. */
> --
> 2.27.0
>
--
H.J.
* [PATCH v3] elf: Properly align PT_LOAD segments [BZ #28676]
From: H.J. Lu @ 2021-12-09 18:04 UTC
To: Rongwei Wang; +Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg
On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J. Lu wrote:
> On Wed, Dec 8, 2021 at 9:57 PM Rongwei Wang
> <rongwei.wang@linux.alibaba.com> wrote:
> >
> > ld.so currently maps the LOAD segments aligned only to the base page
> > size (e.g. 4k on x86, or 4k, 16k and 64k on arm64). This is a bug,
> > which has been reported:
> >
> > https://sourceware.org/bugzilla/show_bug.cgi?id=28676
> >
> > This patch fixes it: ld.so now aligns the mapping address of the first
> > LOAD segment to p_align when p_align is greater than the current base
> > page size.
> >
> > A testcase:
> > main.c:
> >
> > extern void dso_test(void);
> > int main(void)
> > {
> > dso_test();
> > getchar();
> >
> > return 0;
> > }
> >
> > load.c, used to generate libload.so:
> >
> > int foo __attribute__((aligned(0x200000))) = 1;
> > void dso_test(void)
> > {
> > printf("dso test\n");
> > printf("foo: %p\n", &foo);
> > }
> >
> > The steps:
> > $ gcc -O2 -fPIC -c -o load.o load.c
> > $ gcc -shared -Wl,-z,max-page-size=0x200000 -o libload.so load.o
> > $ gcc -no-pie -Wl,-z,max-page-size=0x200000 -O2 -o dso main.c libload.so -Wl,-R,.
> >
> > Before fixing:
> > $ ./dso
> > dso test
> > foo: 0xffff88ae2000
> >
> > After fixed:
> > $ ./dso
> > dso test
> > foo: 0xffff9e000000
> >
> > This fix also makes it simple for code segments to use huge pages.
>
> Please include a testcase, like
>
> https://gitlab.com/x86-glibc/glibc/-/commits/users/hjl/pr28676/master
>
> > Signed-off-by: Xu Yu <xuyu@linux.alibaba.com>
> > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com>
> > ---
> > elf/dl-load.c | 1 +
> > elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
> > include/link.h | 3 +++
> > 3 files changed, 62 insertions(+), 5 deletions(-)
> >
> > diff --git a/elf/dl-load.c b/elf/dl-load.c
> > index e39980fb19..136cfe2fa8 100644
> > --- a/elf/dl-load.c
> > +++ b/elf/dl-load.c
> > @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
> > c->dataend = ph->p_vaddr + ph->p_filesz;
> > c->allocend = ph->p_vaddr + ph->p_memsz;
> > c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
> > + l->l_load_align = ph->p_align;
>
> Can you add an alignment field to
>
> /* This structure describes one PT_LOAD command.
> Its details have been expanded out and converted. */
> struct loadcmd
> {
> ElfW(Addr) mapstart, mapend, dataend, allocend;
> ElfW(Off) mapoff;
> int prot; /* PROT_* bits. */
> };
>
> instead?
>
Hi,
I updated your patch. Please take a look.
H.J.
--
When PT_LOAD segment alignment > the page size, allocate enough space
to ensure that the segment can be properly aligned.
This fixes [BZ #28676].
---
elf/dl-load.c | 1 +
elf/dl-load.h | 2 +-
elf/dl-map-segments.h | 51 +++++++++++++++++++++++++++++++++++++++----
3 files changed, 49 insertions(+), 5 deletions(-)
diff --git a/elf/dl-load.c b/elf/dl-load.c
index bf8957e73c..9a23590bf4 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
c->dataend = ph->p_vaddr + ph->p_filesz;
c->allocend = ph->p_vaddr + ph->p_memsz;
+ c->mapalign = ph->p_align;
c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
/* Determine whether there is a gap between the last segment
diff --git a/elf/dl-load.h b/elf/dl-load.h
index e329d49a81..c121e3456c 100644
--- a/elf/dl-load.h
+++ b/elf/dl-load.h
@@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
Its details have been expanded out and converted. */
struct loadcmd
{
- ElfW(Addr) mapstart, mapend, dataend, allocend;
+ ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
ElfW(Off) mapoff;
int prot; /* PROT_* bits. */
};
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index f9fb110ee3..f147ec232f 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -18,6 +18,52 @@
#include <dl-load.h>
+/* Map a segment and align it properly. */
+
+static __always_inline ElfW(Addr)
+_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
+ const size_t maplength, int fd)
+{
+ if (c->mapalign > GLRO(dl_pagesize))
+ {
+ /* If the segment alignment > the page size, allocate enough space
+ to ensure that the segment can be properly aligned. */
+ ElfW(Addr) maplen = (maplength >= c->mapalign
+ ? (maplength + c->mapalign)
+ : (2 * c->mapalign));
+ ElfW(Addr) map_start
+ = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
+ PROT_NONE, MAP_ANONYMOUS|MAP_PRIVATE,
+ -1, 0);
+ if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
+ return map_start;
+
+ ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
+ ElfW(Addr) map_end = map_start_aligned + maplength;
+ map_start_aligned
+ = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
+ maplength, c->prot,
+ MAP_COPY|MAP_FILE|MAP_FIXED,
+ fd, c->mapoff);
+ if (__glibc_likely ((void *) map_start_aligned != MAP_FAILED))
+ {
+ /* Unmap the unused regions. */
+ ElfW(Addr) delta = map_start_aligned - map_start;
+ if (delta)
+ __munmap ((void *) map_start, delta);
+ delta = map_start + maplen - map_end;
+ if (delta)
+ __munmap ((void *) map_end, delta);
+ }
+
+ return map_start_aligned;
+ }
+
+ return (ElfW(Addr)) __mmap ((void *) mappref, maplength,
+ c->prot, MAP_COPY|MAP_FILE,
+ fd, c->mapoff);
+}
+
/* This implementation assumes (as does the corresponding implementation
of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
are always laid out with all segments contiguous (or with gaps
@@ -53,10 +99,7 @@ _dl_map_segments (struct link_map *l, int fd,
- MAP_BASE_ADDR (l));
/* Remember which part of the address space this object uses. */
- l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
- c->prot,
- MAP_COPY|MAP_FILE,
- fd, c->mapoff);
+ l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
--
2.33.1
* [PATCH v4] elf: Properly align PT_LOAD segments [BZ #28676]
From: H.J. Lu @ 2021-12-09 19:29 UTC
To: Rongwei Wang, GNU C Library, Florian Weimer, xuyu, gavin.dg
On Thu, Dec 09, 2021 at 10:04:32AM -0800, H.J. Lu wrote:
> On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J. Lu wrote:
>
> Hi,
>
> I updated your patch. Please take a look.
>
>
> H.J.
> --
> When PT_LOAD segment alignment > the page size, allocate enough space
> to ensure that the segment can be properly aligned.
>
> This fixes [BZ #28676].
H.J.
---
Changes in v4:
1. Call unmap when the second mmap fails.
When PT_LOAD segment alignment > the page size, allocate enough space
to ensure that the segment can be properly aligned.
This fixes [BZ #28676].
---
elf/dl-load.c | 1 +
elf/dl-load.h | 2 +-
elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
3 files changed, 47 insertions(+), 5 deletions(-)
diff --git a/elf/dl-load.c b/elf/dl-load.c
index bf8957e73c..9a23590bf4 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
c->dataend = ph->p_vaddr + ph->p_filesz;
c->allocend = ph->p_vaddr + ph->p_memsz;
+ c->mapalign = ph->p_align;
c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
/* Determine whether there is a gap between the last segment
diff --git a/elf/dl-load.h b/elf/dl-load.h
index e329d49a81..c121e3456c 100644
--- a/elf/dl-load.h
+++ b/elf/dl-load.h
@@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
Its details have been expanded out and converted. */
struct loadcmd
{
- ElfW(Addr) mapstart, mapend, dataend, allocend;
+ ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
ElfW(Off) mapoff;
int prot; /* PROT_* bits. */
};
diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
index f9fb110ee3..f710cf2e48 100644
--- a/elf/dl-map-segments.h
+++ b/elf/dl-map-segments.h
@@ -18,6 +18,50 @@
#include <dl-load.h>
+/* Map a segment and align it properly. */
+
+static __always_inline ElfW(Addr)
+_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
+ const size_t maplength, int fd)
+{
+ if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize)))
+ return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot,
+ MAP_COPY|MAP_FILE, fd, c->mapoff);
+
+ /* If the segment alignment > the page size, allocate enough space to
+ ensure that the segment can be properly aligned. */
+ ElfW(Addr) maplen = (maplength >= c->mapalign
+ ? (maplength + c->mapalign)
+ : (2 * c->mapalign));
+ ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
+ PROT_NONE,
+ MAP_ANONYMOUS|MAP_PRIVATE,
+ -1, 0);
+ if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
+ return map_start;
+
+ ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
+ map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
+ maplength, c->prot,
+ MAP_COPY|MAP_FILE|MAP_FIXED,
+ fd, c->mapoff);
+ if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
+ __munmap ((void *) map_start, maplen);
+ else
+ {
+ /* Unmap the unused regions. */
+ ElfW(Addr) delta = map_start_aligned - map_start;
+ if (delta)
+ __munmap ((void *) map_start, delta);
+ ElfW(Addr) map_end = map_start_aligned + maplength;
+ delta = map_start + maplen - map_end;
+ if (delta)
+ __munmap ((void *) map_end, delta);
+ }
+
+ return map_start_aligned;
+}
+
/* This implementation assumes (as does the corresponding implementation
of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
are always laid out with all segments contiguous (or with gaps
@@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd,
- MAP_BASE_ADDR (l));
/* Remember which part of the address space this object uses. */
- l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
- c->prot,
- MAP_COPY|MAP_FILE,
- fd, c->mapoff);
+ l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
--
2.33.1
* Re: [PATCH v4] elf: Properly align PT_LOAD segments [BZ #28676]
From: Rongwei Wang @ 2021-12-10 1:58 UTC
To: H.J. Lu, GNU C Library, Florian Weimer, xuyu, gavin.dg
Hi H.J.,
Thanks for your help! It's much better than my version.
I also found a small bug; please check it again when you have time.
On 12/10/21 3:29 AM, H.J. Lu wrote:
> On Thu, Dec 09, 2021 at 10:04:32AM -0800, H.J. Lu wrote:
>> On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J. Lu wrote:
>>
>> Hi,
>>
>> I updated your patch. Please take a look.
>>
>>
>> H.J.
>> --
>> When PT_LOAD segment alignment > the page size, allocate enough space
>> to ensure that the segment can be properly aligned.
>>
>> This fixes [BZ #28676].
>
>
> H.J.
> ---
> Changes in v4:
>
> 1. Call unmap when the second mmap fails.
>
> When PT_LOAD segment alignment > the page size, allocate enough space
> to ensure that the segment can be properly aligned.
>
> This fixes [BZ #28676].
> ---
> elf/dl-load.c | 1 +
> elf/dl-load.h | 2 +-
> elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
> 3 files changed, 47 insertions(+), 5 deletions(-)
>
> diff --git a/elf/dl-load.c b/elf/dl-load.c
> index bf8957e73c..9a23590bf4 100644
> --- a/elf/dl-load.c
> +++ b/elf/dl-load.c
> @@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
> c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
> c->dataend = ph->p_vaddr + ph->p_filesz;
> c->allocend = ph->p_vaddr + ph->p_memsz;
> + c->mapalign = ph->p_align;
> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
>
> /* Determine whether there is a gap between the last segment
> diff --git a/elf/dl-load.h b/elf/dl-load.h
> index e329d49a81..c121e3456c 100644
> --- a/elf/dl-load.h
> +++ b/elf/dl-load.h
> @@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
> Its details have been expanded out and converted. */
> struct loadcmd
> {
> - ElfW(Addr) mapstart, mapend, dataend, allocend;
> + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
> ElfW(Off) mapoff;
> int prot; /* PROT_* bits. */
> };
> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
> index f9fb110ee3..f710cf2e48 100644
> --- a/elf/dl-map-segments.h
> +++ b/elf/dl-map-segments.h
> @@ -18,6 +18,50 @@
>
> #include <dl-load.h>
>
> +/* Map a segment and align it properly. */
> +
> +static __always_inline ElfW(Addr)
> +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
> + const size_t maplength, int fd)
> +{
> + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize)))
> + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot,
> + MAP_COPY|MAP_FILE, fd, c->mapoff);
> +
> + /* If the segment alignment > the page size, allocate enough space to
> + ensure that the segment can be properly aligned. */
> + ElfW(Addr) maplen = (maplength >= c->mapalign
> + ? (maplength + c->mapalign)
> + : (2 * c->mapalign));
> + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
> + PROT_NONE,
> + MAP_ANONYMOUS|MAP_PRIVATE,
> + -1, 0);
> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
> + return map_start;
> +
> + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
> + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
> + maplength, c->prot,
> + MAP_COPY|MAP_FILE|MAP_FIXED,
> + fd, c->mapoff);
> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
Should this check map_start_aligned instead of map_start?
> + __munmap ((void *) map_start, maplen);
> + else
> + {
> + /* Unmap the unused regions. */
> + ElfW(Addr) delta = map_start_aligned - map_start;
> + if (delta)
> + __munmap ((void *) map_start, delta);
> + ElfW(Addr) map_end = map_start_aligned + maplength;
> + delta = map_start + maplen - map_end;
> + if (delta)
> + __munmap ((void *) map_end, delta);
> + }
> +
> + return map_start_aligned;
> +}
> +
> /* This implementation assumes (as does the corresponding implementation
> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
> are always laid out with all segments contiguous (or with gaps
> @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd,
> - MAP_BASE_ADDR (l));
>
> /* Remember which part of the address space this object uses. */
> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
> - c->prot,
> - MAP_COPY|MAP_FILE,
> - fd, c->mapoff);
> + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
>
>
Everything else looks OK to me.
I am not sure whether I need to add a testcase in a new patch, because
I saw you had already submitted one:
https://gitlab.com/x86-glibc/glibc/-/commit/ba63c1cb2ea1b0ae421f66756771ff4c89d4a470
I will test it again and send out PATCH v5.
Thanks.
* Re: [PATCH v4] elf: Properly align PT_LOAD segments [BZ #28676]
From: H.J. Lu @ 2021-12-10 2:24 UTC
To: Rongwei Wang; +Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg
On Thu, Dec 9, 2021 at 5:58 PM Rongwei Wang
<rongwei.wang@linux.alibaba.com> wrote:
>
>
> Hi H.J.,
>
> Thanks for your help! It's much better than my version.
> I also found a small bug; please check it again when you have time.
>
> On 12/10/21 3:29 AM, H.J. Lu wrote:
> > On Thu, Dec 09, 2021 at 10:04:32AM -0800, H.J. Lu wrote:
> >> On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J. Lu wrote:
> >>
> >> Hi,
> >>
> >> I updated your patch. Please take a look.
> >>
> >>
> >> H.J.
> >> --
> >> When PT_LOAD segment alignment > the page size, allocate enough space
> >> to ensure that the segment can be properly aligned.
> >>
> >> This fixes [BZ #28676].
> >
> >
> > H.J.
> > ---
> > Changes in v4:
> >
> > 1. Call unmap when the second mmap fails.
> >
> > When PT_LOAD segment alignment > the page size, allocate enough space
> > to ensure that the segment can be properly aligned.
> >
> > This fixes [BZ #28676].
> > ---
> > elf/dl-load.c | 1 +
> > elf/dl-load.h | 2 +-
> > elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
> > 3 files changed, 47 insertions(+), 5 deletions(-)
> >
> > diff --git a/elf/dl-load.c b/elf/dl-load.c
> > index bf8957e73c..9a23590bf4 100644
> > --- a/elf/dl-load.c
> > +++ b/elf/dl-load.c
> > @@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
> > c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
> > c->dataend = ph->p_vaddr + ph->p_filesz;
> > c->allocend = ph->p_vaddr + ph->p_memsz;
> > + c->mapalign = ph->p_align;
> > c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
> >
> > /* Determine whether there is a gap between the last segment
> > diff --git a/elf/dl-load.h b/elf/dl-load.h
> > index e329d49a81..c121e3456c 100644
> > --- a/elf/dl-load.h
> > +++ b/elf/dl-load.h
> > @@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
> > Its details have been expanded out and converted. */
> > struct loadcmd
> > {
> > - ElfW(Addr) mapstart, mapend, dataend, allocend;
> > + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
> > ElfW(Off) mapoff;
> > int prot; /* PROT_* bits. */
> > };
> > diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
> > index f9fb110ee3..f710cf2e48 100644
> > --- a/elf/dl-map-segments.h
> > +++ b/elf/dl-map-segments.h
> > @@ -18,6 +18,50 @@
> >
> > #include <dl-load.h>
> >
> > +/* Map a segment and align it properly. */
> > +
> > +static __always_inline ElfW(Addr)
> > +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
> > + const size_t maplength, int fd)
> > +{
> > + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize)))
> > + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot,
> > + MAP_COPY|MAP_FILE, fd, c->mapoff);
> > +
> > + /* If the segment alignment > the page size, allocate enough space to
> > + ensure that the segment can be properly aligned. */
> > + ElfW(Addr) maplen = (maplength >= c->mapalign
> > + ? (maplength + c->mapalign)
> > + : (2 * c->mapalign));
> > + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
> > + PROT_NONE,
> > + MAP_ANONYMOUS|MAP_PRIVATE,
> > + -1, 0);
> > + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
> > + return map_start;
> > +
> > + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
> > + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
> > + maplength, c->prot,
> > + MAP_COPY|MAP_FILE|MAP_FIXED,
> > + fd, c->mapoff);
> > + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
Should this be map_start_aligned here?
>
> > + __munmap ((void *) map_start, maplen);
> > + else
> > + {
> > + /* Unmap the unused regions. */
> > + ElfW(Addr) delta = map_start_aligned - map_start;
> > + if (delta)
> > + __munmap ((void *) map_start, delta);
> > + ElfW(Addr) map_end = map_start_aligned + maplength;
> > + delta = map_start + maplen - map_end;
> > + if (delta)
> > + __munmap ((void *) map_end, delta);
> > + }
> > +
> > + return map_start_aligned;
> > +}
> > +
> > /* This implementation assumes (as does the corresponding implementation
> > of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
> > are always laid out with all segments contiguous (or with gaps
> > @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd,
> > - MAP_BASE_ADDR (l));
> >
> > /* Remember which part of the address space this object uses. */
> > - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
> > - c->prot,
> > - MAP_COPY|MAP_FILE,
> > - fd, c->mapoff);
> > + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
> > if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
> > return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
> >
> >
> Everything looks OK to me.
>
> And I am not sure whether I need to add a testcase in a new patch,
> because I saw you had already submitted the testcase:
> https://gitlab.com/x86-glibc/glibc/-/commit/ba63c1cb2ea1b0ae421f66756771ff4c89d4a470
No, I didn't submit it.
> I will test it again, and send out PATCH v5.
Please include my testcase in your v5 patch.
Thanks.
--
H.J.
* Re: [PATCH v4] elf: Properly align PT_LOAD segments [BZ #28676]
2021-12-10 2:24 ` H.J. Lu
@ 2021-12-10 2:34 ` Rongwei Wang
0 siblings, 0 replies; 10+ messages in thread
From: Rongwei Wang @ 2021-12-10 2:34 UTC (permalink / raw)
To: H.J. Lu; +Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg
On 12/10/21 10:24 AM, H.J. Lu wrote:
> On Thu, Dec 9, 2021 at 5:58 PM Rongwei Wang
> <rongwei.wang@linux.alibaba.com> wrote:
>>
>>
>> Hi, H.J.
>>
>> Thanks for your help! It's much better than my version.
>> And I found a small bug; you can check it again when you have time.
>>
>> On 12/10/21 3:29 AM, H.J. Lu wrote:
>>> On Thu, Dec 09, 2021 at 10:04:32AM -0800, H.J. Lu wrote:
>>>> On Thu, Dec 09, 2021 at 07:14:10AM -0800, H.J. Lu wrote:
>>>>
>>>> Hi,
>>>>
>>>> I updated your patch. Please take a look.
>>>>
>>>>
>>>> H.J.
>>>> --
>>>> When PT_LOAD segment alignment > the page size, allocate enough space
>>>> to ensure that the segment can be properly aligned.
>>>>
>>>> This fixes [BZ #28676].
>>>
>>>
>>> H.J.
>>> ---
>>> Changes in v4:
>>>
>>> 1. Call unmap when the second mmap fails.
>>>
>>> When PT_LOAD segment alignment > the page size, allocate enough space
>>> to ensure that the segment can be properly aligned.
>>>
>>> This fixes [BZ #28676].
>>> ---
>>> elf/dl-load.c | 1 +
>>> elf/dl-load.h | 2 +-
>>> elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
>>> 3 files changed, 47 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/elf/dl-load.c b/elf/dl-load.c
>>> index bf8957e73c..9a23590bf4 100644
>>> --- a/elf/dl-load.c
>>> +++ b/elf/dl-load.c
>>> @@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd,
>>> c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize));
>>> c->dataend = ph->p_vaddr + ph->p_filesz;
>>> c->allocend = ph->p_vaddr + ph->p_memsz;
>>> + c->mapalign = ph->p_align;
>>> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize));
>>>
>>> /* Determine whether there is a gap between the last segment
>>> diff --git a/elf/dl-load.h b/elf/dl-load.h
>>> index e329d49a81..c121e3456c 100644
>>> --- a/elf/dl-load.h
>>> +++ b/elf/dl-load.h
>>> @@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA;
>>> Its details have been expanded out and converted. */
>>> struct loadcmd
>>> {
>>> - ElfW(Addr) mapstart, mapend, dataend, allocend;
>>> + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign;
>>> ElfW(Off) mapoff;
>>> int prot; /* PROT_* bits. */
>>> };
>>> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h
>>> index f9fb110ee3..f710cf2e48 100644
>>> --- a/elf/dl-map-segments.h
>>> +++ b/elf/dl-map-segments.h
>>> @@ -18,6 +18,50 @@
>>>
>>> #include <dl-load.h>
>>>
>>> +/* Map a segment and align it properly. */
>>> +
>>> +static __always_inline ElfW(Addr)
>>> +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref,
>>> + const size_t maplength, int fd)
>>> +{
>>> + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize)))
>>> + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot,
>>> + MAP_COPY|MAP_FILE, fd, c->mapoff);
>>> +
>>> + /* If the segment alignment > the page size, allocate enough space to
>>> + ensure that the segment can be properly aligned. */
>>> + ElfW(Addr) maplen = (maplength >= c->mapalign
>>> + ? (maplength + c->mapalign)
>>> + : (2 * c->mapalign));
>>> + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen,
>>> + PROT_NONE,
>>> + MAP_ANONYMOUS|MAP_PRIVATE,
>>> + -1, 0);
>>> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
>>> + return map_start;
>>> +
>>> + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign);
>>> + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned,
>>> + maplength, c->prot,
>>> + MAP_COPY|MAP_FILE|MAP_FIXED,
>>> + fd, c->mapoff);
>>> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED))
>> Should this be map_start_aligned here?
>>
>>> + __munmap ((void *) map_start, maplen);
>>> + else
>>> + {
>>> + /* Unmap the unused regions. */
>>> + ElfW(Addr) delta = map_start_aligned - map_start;
>>> + if (delta)
>>> + __munmap ((void *) map_start, delta);
>>> + ElfW(Addr) map_end = map_start_aligned + maplength;
>>> + delta = map_start + maplen - map_end;
>>> + if (delta)
>>> + __munmap ((void *) map_end, delta);
>>> + }
>>> +
>>> + return map_start_aligned;
>>> +}
>>> +
>>> /* This implementation assumes (as does the corresponding implementation
>>> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects
>>> are always laid out with all segments contiguous (or with gaps
>>> @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd,
>>> - MAP_BASE_ADDR (l));
>>>
>>> /* Remember which part of the address space this object uses. */
>>> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength,
>>> - c->prot,
>>> - MAP_COPY|MAP_FILE,
>>> - fd, c->mapoff);
>>> + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd);
>>> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED))
>>> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT;
>>>
>>>
>> Everything looks OK to me.
>>
>> And I am not sure whether I need to add a testcase in a new patch,
>> because I saw you had already submitted the testcase:
>> https://gitlab.com/x86-glibc/glibc/-/commit/ba63c1cb2ea1b0ae421f66756771ff4c89d4a470
>
> No, I didn't submit it.
>
>> I will test it again, and send out PATCH v5.
>
> Please include my testcase in your v5 patch.
Ok
Thanks.
>
> Thanks.
>
>
> --
> H.J.
>
* [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored
@ 2021-12-09 5:49 Rongwei Wang
0 siblings, 0 replies; 10+ messages in thread
From: Rongwei Wang @ 2021-12-09 5:49 UTC (permalink / raw)
To: libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg
Hi
This patch mainly fixes a reported bug:
"p_align on PT_LOAD segment in DSO isn't honored"
https://sourceware.org/bugzilla/show_bug.cgi?id=28676
A testcase has been built by H.J. Lu:
https://sourceware.org/bugzilla/attachment.cgi?id=13838
And in Patch 1/1, I also show a simple testcase
modified from H.J. Lu's.
A similar bug in ELF binaries was also reported:
https://bugzilla.kernel.org/show_bug.cgi?id=215275
A related fix patch can be found here:
https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/
Originally, we sent this patch to introduce .text
hugepages and were not aware of this bug. Now, I am
not sure whether the kernel maintainers will regard it
as a bug, but it deserves another try.
Thanks.
Changelog:
RFC/v1 -> v2
- Patch "elf: align the mapping address of LOAD segments with p_align":
fixed the coding style and added a testcase to the commit message.
RFC link:
https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/
Rongwei Wang (1):
elf: align the mapping address of LOAD segments with p_align
elf/dl-load.c | 1 +
elf/dl-map-segments.h | 63 +++++++++++++++++++++++++++++++++++++++----
include/link.h | 3 +++
3 files changed, 62 insertions(+), 5 deletions(-)
--
2.27.0
2021-12-09 5:57 [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang
2021-12-09 5:57 ` [PATCH v2 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang
2021-12-09 15:14 ` H.J. Lu
2021-12-09 18:04 ` [PATCH v3] elf: Properly align PT_LOAD segments [BZ #28676] H.J. Lu
2021-12-09 19:29 ` [PATCH v4] " H.J. Lu
2021-12-10 1:58 ` Rongwei Wang
2021-12-10 2:24 ` H.J. Lu
2021-12-10 2:34 ` Rongwei Wang
2021-12-09 6:36 ` [PATCH v2 0/1] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang
2021-12-09 5:49 Rongwei Wang