* [PATCH RFC 0/1] make ld.so map .text LOAD ssegments and aligned by p_align @ 2021-12-04 4:58 Rongwei Wang
  2021-12-04 4:58 ` [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang
  ` (2 more replies)
  0 siblings, 3 replies; 42+ messages in thread
From: Rongwei Wang @ 2021-12-04 4:58 UTC (permalink / raw)
To: libc-alpha; +Cc: xuyu, gavin.dg

Hello, glibc maintainers and developers:

We are software development engineers from the Alibaba Cloud OS team, mainly
responsible for the development and maintenance of Alibaba Cloud Linux. This
is our first time sending a patch to the glibc mailing list; if we have
overlooked anything, please let me know. Thanks very much!

Recently, we have been working on a feature that lets an application's .text
and .data segments use huge pages (2M). At first, we implemented the feature
in the kernel, see LINK[1]. However, aligning the mapping address of the LOAD
segment was regarded as a hack when done in the kernel. If .text is to use
huge pages, the mapping address must be aligned; here this refers to the
first LOAD segment. So we think it is better to implement this part in glibc
rather than in the kernel. Related discussion has already been raised in
LINK[2, 3].

The implementation in Patch 1/1 only needs a little support from ld.so; the
application can then call 'madvise' to mark its .text pages and let the
'khugepaged' thread collapse them into huge pages. (We also plan to provide a
global sysfs interface in the OS for .text huge pages, to make .text huge
pages transparent. If you are interested in this, a simple implementation
that works is in LINK[1].)

In addition, LINK[3] mentions using huge pages for both the .data and .text
segments. Our patch only implements the .text part (the first LOAD segment);
we think huge pages are not necessary for .data segments, because they are
small and show no performance benefit in most applications.

This patch mainly moves the part that aligns the mapping address into ld.so;
the rest of the work needed to back .text with huge pages is left to the OS.

LINK:
[1] first patch in linux kernel:
https://patchwork.kernel.org/project/linux-mm/cover/20211009092658.59665-1-rongwei.wang@linux.alibaba.com/
[2] https://libc-alpha.sourceware.narkive.com/zqNSW0dY/glibc-loading-of-shared-objects-with-holes-wastes-address-space
[3] https://sourceware.org/pipermail/libc-alpha/2021-February/122331.html

Best regards,
Rongwei Wang

Rongwei Wang (1):
  elf: align the mapping address of LOAD segments with p_align

 elf/dl-load.c         |  1 +
 elf/dl-map-segments.h | 60 ++++++++++++++++++++++++++++++++++++++++---
 include/link.h        |  3 +++
 3 files changed, 60 insertions(+), 4 deletions(-)

-- 
2.27.0

^ permalink raw reply [flat|nested] 42+ messages in thread
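The madvise + khugepaged flow described in the cover letter can be illustrated with a small, self-contained sketch. This is not part of the patch: it assumes a kernel with CONFIG_READ_ONLY_THP_FOR_FS enabled (so khugepaged may collapse file-backed executable mappings) and uses the linker-provided symbols __executable_start and _etext to approximate the main program's text range; for a DSO the same call would be issued on the range reported by dl_iterate_phdr.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

extern char __executable_start[];   /* provided by the default GNU ld script */
extern char _etext[];

int
main (void)
{
  uintptr_t page = (uintptr_t) sysconf (_SC_PAGESIZE);
  uintptr_t start = (uintptr_t) __executable_start;        /* page aligned */
  uintptr_t end = ((uintptr_t) _etext + page - 1) & ~(page - 1);

  /* Mark the (file-backed) text mapping so that khugepaged may collapse it
     into huge pages.  This only pays off when the mapping itself has a
     suitable (e.g. 2 MiB) alignment, which is what the ld.so change is
     about.  */
  if (madvise ((void *) start, end - start, MADV_HUGEPAGE) != 0)
    perror ("madvise");
  return 0;
}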
* [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-04 4:58 [PATCH RFC 0/1] make ld.so map .text LOAD ssegments and aligned by p_align Rongwei Wang @ 2021-12-04 4:58 ` Rongwei Wang 2021-12-04 18:10 ` Florian Weimer 2021-12-06 14:48 ` H.J. Lu 2021-12-10 12:39 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang 2021-12-13 2:51 ` [PATCH v6 " Rongwei Wang 2 siblings, 2 replies; 42+ messages in thread From: Rongwei Wang @ 2021-12-04 4:58 UTC (permalink / raw) To: libc-alpha; +Cc: xuyu, gavin.dg Now, ld.so always map the LOAD segments and aligned by base page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And this patch improve the scheme here. In this patch, ld.so can align the mapping address of the first LOAD segment with p_align when p_align is greater than the current base page size. And this change makes code segments using huge pages become simple and available. Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/dl-load.c | 1 + elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- include/link.h | 3 +++ 3 files changed, 56 insertions(+), 2 deletions(-) diff --git a/elf/dl-load.c b/elf/dl-load.c index e39980fb19..136cfe2fa8 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, c->dataend = ph->p_vaddr + ph->p_filesz; c->allocend = ph->p_vaddr + ph->p_memsz; c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); + l->l_load_align = ph->p_align; /* Determine whether there is a gap between the last segment and this one. */ diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h index ac9f09ab4c..ae03236045 100644 --- a/elf/dl-map-segments.h +++ b/elf/dl-map-segments.h @@ -18,6 +18,47 @@ #include <dl-load.h> +static __always_inline void * +_dl_map_segments_align (const struct loadcmd *c, + ElfW(Addr) mappref, int fd, size_t alignment, + const size_t maplength) +{ + unsigned long map_start, map_start_align, map_end; + unsigned long maplen = (maplength >= alignment) ? + (maplength + alignment) : (2 * alignment); + + /* Allocate enough space to ensure that address aligned by + p_align is included. */ + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, + PROT_NONE, + MAP_ANONYMOUS | MAP_PRIVATE, + -1, 0); + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { + /* If mapping a aligned address failed, then ... */ + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, + c->prot, + MAP_COPY|MAP_FILE, + fd, c->mapoff); + + return (void *) map_start; + } + map_start_align = ALIGN_UP(map_start, alignment); + map_end = map_start_align + maplength; + + /* Remember which part of the address space this object uses. 
*/ + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, + c->prot, + MAP_COPY|MAP_FILE|MAP_FIXED, + fd, c->mapoff); + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) + return MAP_FAILED; + if (map_start_align > map_start) + __munmap((void *)map_start, map_start_align - map_start); + __munmap((void *)map_end, map_start + maplen - map_end); + + return (void *) map_start_align; +} + /* This implementation assumes (as does the corresponding implementation of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects are always laid out with all segments contiguous (or with gaps @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, c->mapstart & GLRO(dl_use_load_bias)) - MAP_BASE_ADDR (l)); - /* Remember which part of the address space this object uses. */ - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, + /* During mapping, align the mapping address of the LOAD segments + according to own p_align. This helps OS map its code segment to + huge pages. */ + if (l->l_load_align > GLRO(dl_pagesize)) { + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, + mappref, fd, + l->l_load_align, maplength); + } else { + /* Remember which part of the address space this object uses. */ + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot, MAP_COPY|MAP_FILE, fd, c->mapoff); + } if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; diff --git a/include/link.h b/include/link.h index aea268439c..fc6ce29fab 100644 --- a/include/link.h +++ b/include/link.h @@ -298,6 +298,9 @@ struct link_map /* Thread-local storage related info. */ + /* Alignment requirement of the LOAD block. */ + size_t l_load_align; + /* Start of the initialization image. */ void *l_tls_initimage; /* Size of the initialization image. */ -- 2.27.0 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-04 4:58 ` [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang @ 2021-12-04 18:10 ` Florian Weimer 2021-12-06 2:47 ` Rongwei Wang 2021-12-06 14:48 ` H.J. Lu 1 sibling, 1 reply; 42+ messages in thread From: Florian Weimer @ 2021-12-04 18:10 UTC (permalink / raw) To: Rongwei Wang via Libc-alpha; +Cc: Rongwei Wang, xuyu, gavin.dg * Rongwei Wang via Libc-alpha: > Now, ld.so always map the LOAD segments and aligned by base > page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And > this patch improve the scheme here. In this patch, ld.so > can align the mapping address of the first LOAD segment with > p_align when p_align is greater than the current base page > size. > > And this change makes code segments using huge pages become > simple and available. I feel like we should be able to tell the kernel that we want an aligned mapping. munmap is not actually cheap. Do you know if there are some ideas in this area? Perhaps with another MAP_ flag and encoding the requested alignment in the lower bits of the hints address? Thanks, Florian ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align
  2021-12-04 18:10 ` Florian Weimer
@ 2021-12-06 2:47 ` Rongwei Wang
  0 siblings, 0 replies; 42+ messages in thread
From: Rongwei Wang @ 2021-12-06 2:47 UTC (permalink / raw)
To: Florian Weimer, Rongwei Wang via Libc-alpha; +Cc: xuyu, gavin.dg

Hi, Florian

On 12/5/21 2:10 AM, Florian Weimer wrote:
> * Rongwei Wang via Libc-alpha:
>
>> Now, ld.so always map the LOAD segments and aligned by base
>> page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And
>> this patch improve the scheme here. In this patch, ld.so
>> can align the mapping address of the first LOAD segment with
>> p_align when p_align is greater than the current base page
>> size.
>>
>> And this change makes code segments using huge pages become
>> simple and available.
>
> I feel like we should be able to tell the kernel that we want an aligned
> mapping. munmap is not actually cheap.
I see that the munmap calls here will slow down applications that link
against DSOs, although the effect seems small? Sorry, I do not know much
about this impact. Of course, the effect would be magnified if an
application loads many DSOs.
>
> Do you know if there are some ideas in this area? Perhaps with another
As far as I can tell, there are two methods to make .text use huge pages:
1. the way mentioned above: madvise + khugepaged;
2. use libhugetlbfs[1]: this requires recompiling the application and
linking it against libhugetlbfs.so. The obvious difference is that it
uses hugetlb.
Neither of these two methods seems particularly friendly to applications.
> MAP_ flag and encoding the requested alignment in the lower bits of the
Actually, it seems the kernel developers do not want to introduce new
MAP_ flags to support this. Maybe they think mmap and munmap are enough
for applications to get an aligned mapping address. Of course, that is
just my guess and my own view.

In addition, back to the patch: what do you think of the alignment method
used in our patch? The mapping would be aligned to 4k, 64k or 2M,
according to the p_align value. If we try to introduce new MAP_ flags in
the kernel, I am not sure whether we would need to add multiple flags,
for example MAP_64K, MAP_2M, etc.

LINK[1]: https://github.com/libhugetlbfs/libhugetlbfs

Thanks!
> hints address?
>
> Thanks,
> Florian
>

^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-04 4:58 ` [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang 2021-12-04 18:10 ` Florian Weimer @ 2021-12-06 14:48 ` H.J. Lu 2021-12-08 2:14 ` Rongwei Wang 1 sibling, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-06 14:48 UTC (permalink / raw) To: Rongwei Wang; +Cc: GNU C Library, xuyu, gavin.dg On Fri, Dec 3, 2021 at 9:00 PM Rongwei Wang via Libc-alpha <libc-alpha@sourceware.org> wrote: > > Now, ld.so always map the LOAD segments and aligned by base > page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And > this patch improve the scheme here. In this patch, ld.so > can align the mapping address of the first LOAD segment with > p_align when p_align is greater than the current base page > size. This is a bug fix. Please open a glibc bug: https://sourceware.org/bugzilla/enter_bug.cgi with a testcase which should align variables to 2MB in the main program and a shared library. Please include the testcase in your patch and mention the glibc bug in your commit message. > And this change makes code segments using huge pages become > simple and available. > > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > --- > elf/dl-load.c | 1 + > elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- > include/link.h | 3 +++ > 3 files changed, 56 insertions(+), 2 deletions(-) > > diff --git a/elf/dl-load.c b/elf/dl-load.c > index e39980fb19..136cfe2fa8 100644 > --- a/elf/dl-load.c > +++ b/elf/dl-load.c > @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, > c->dataend = ph->p_vaddr + ph->p_filesz; > c->allocend = ph->p_vaddr + ph->p_memsz; > c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); > + l->l_load_align = ph->p_align; > > /* Determine whether there is a gap between the last segment > and this one. */ > diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h > index ac9f09ab4c..ae03236045 100644 > --- a/elf/dl-map-segments.h > +++ b/elf/dl-map-segments.h > @@ -18,6 +18,47 @@ > > #include <dl-load.h> > > +static __always_inline void * > +_dl_map_segments_align (const struct loadcmd *c, > + ElfW(Addr) mappref, int fd, size_t alignment, > + const size_t maplength) > +{ > + unsigned long map_start, map_start_align, map_end; > + unsigned long maplen = (maplength >= alignment) ? > + (maplength + alignment) : (2 * alignment); > + > + /* Allocate enough space to ensure that address aligned by > + p_align is included. */ > + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, > + PROT_NONE, > + MAP_ANONYMOUS | MAP_PRIVATE, > + -1, 0); > + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { > + /* If mapping a aligned address failed, then ... */ > + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > + c->prot, > + MAP_COPY|MAP_FILE, > + fd, c->mapoff); > + > + return (void *) map_start; > + } > + map_start_align = ALIGN_UP(map_start, alignment); > + map_end = map_start_align + maplength; > + > + /* Remember which part of the address space this object uses. 
*/ > + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, > + c->prot, > + MAP_COPY|MAP_FILE|MAP_FIXED, > + fd, c->mapoff); > + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) > + return MAP_FAILED; > + if (map_start_align > map_start) > + __munmap((void *)map_start, map_start_align - map_start); > + __munmap((void *)map_end, map_start + maplen - map_end); > + > + return (void *) map_start_align; > +} > + Please follow the glibc coding format. > /* This implementation assumes (as does the corresponding implementation > of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects > are always laid out with all segments contiguous (or with gaps > @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, > c->mapstart & GLRO(dl_use_load_bias)) > - MAP_BASE_ADDR (l)); > > - /* Remember which part of the address space this object uses. */ > - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > + /* During mapping, align the mapping address of the LOAD segments > + according to own p_align. This helps OS map its code segment to > + huge pages. */ > + if (l->l_load_align > GLRO(dl_pagesize)) { > + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, > + mappref, fd, > + l->l_load_align, maplength); > + } else { > + /* Remember which part of the address space this object uses. */ > + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > c->prot, > MAP_COPY|MAP_FILE, > fd, c->mapoff); Please follow the glibc coding format. > + } > if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) > return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; > > diff --git a/include/link.h b/include/link.h > index aea268439c..fc6ce29fab 100644 > --- a/include/link.h > +++ b/include/link.h > @@ -298,6 +298,9 @@ struct link_map > > /* Thread-local storage related info. */ > > + /* Alignment requirement of the LOAD block. */ > + size_t l_load_align; > + > /* Start of the initialization image. */ > void *l_tls_initimage; > /* Size of the initialization image. */ > -- > 2.27.0 > Thanks. -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-06 14:48 ` H.J. Lu @ 2021-12-08 2:14 ` Rongwei Wang 2021-12-08 2:33 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Rongwei Wang @ 2021-12-08 2:14 UTC (permalink / raw) To: H.J. Lu; +Cc: GNU C Library, xuyu, gavin.dg Hi hjl On 12/6/21 10:48 PM, H.J. Lu wrote: > On Fri, Dec 3, 2021 at 9:00 PM Rongwei Wang via Libc-alpha > <libc-alpha@sourceware.org> wrote: >> >> Now, ld.so always map the LOAD segments and aligned by base >> page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And >> this patch improve the scheme here. In this patch, ld.so >> can align the mapping address of the first LOAD segment with >> p_align when p_align is greater than the current base page >> size. > > This is a bug fix. Please open a glibc bug: > > https://sourceware.org/bugzilla/enter_bug.cgiOK And I requesting the account. > > with a testcase which should align variables to 2MB in the main By the way, I have a question about whether we need to align each LOAD segments? In our patch, we only fixed the mapping address for the first LOAD segment. > program and a shared library. Please include the testcase in > your patch and mention the glibc bug in your commit message. > >> And this change makes code segments using huge pages become >> simple and available. >> >> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> >> --- >> elf/dl-load.c | 1 + >> elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- >> include/link.h | 3 +++ >> 3 files changed, 56 insertions(+), 2 deletions(-) >> >> diff --git a/elf/dl-load.c b/elf/dl-load.c >> index e39980fb19..136cfe2fa8 100644 >> --- a/elf/dl-load.c >> +++ b/elf/dl-load.c >> @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, >> c->dataend = ph->p_vaddr + ph->p_filesz; >> c->allocend = ph->p_vaddr + ph->p_memsz; >> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); >> + l->l_load_align = ph->p_align; >> >> /* Determine whether there is a gap between the last segment >> and this one. */ >> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h >> index ac9f09ab4c..ae03236045 100644 >> --- a/elf/dl-map-segments.h >> +++ b/elf/dl-map-segments.h >> @@ -18,6 +18,47 @@ >> >> #include <dl-load.h> >> >> +static __always_inline void * >> +_dl_map_segments_align (const struct loadcmd *c, >> + ElfW(Addr) mappref, int fd, size_t alignment, >> + const size_t maplength) >> +{ >> + unsigned long map_start, map_start_align, map_end; >> + unsigned long maplen = (maplength >= alignment) ? >> + (maplength + alignment) : (2 * alignment); >> + >> + /* Allocate enough space to ensure that address aligned by >> + p_align is included. */ >> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, >> + PROT_NONE, >> + MAP_ANONYMOUS | MAP_PRIVATE, >> + -1, 0); >> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { >> + /* If mapping a aligned address failed, then ... */ >> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >> + c->prot, >> + MAP_COPY|MAP_FILE, >> + fd, c->mapoff); >> + >> + return (void *) map_start; >> + } >> + map_start_align = ALIGN_UP(map_start, alignment); >> + map_end = map_start_align + maplength; >> + >> + /* Remember which part of the address space this object uses. 
*/ >> + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, >> + c->prot, >> + MAP_COPY|MAP_FILE|MAP_FIXED, >> + fd, c->mapoff); >> + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) >> + return MAP_FAILED; >> + if (map_start_align > map_start) >> + __munmap((void *)map_start, map_start_align - map_start); >> + __munmap((void *)map_end, map_start + maplen - map_end); >> + >> + return (void *) map_start_align; >> +} >> + > > Please follow the glibc coding format. > >> /* This implementation assumes (as does the corresponding implementation >> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects >> are always laid out with all segments contiguous (or with gaps >> @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, >> c->mapstart & GLRO(dl_use_load_bias)) >> - MAP_BASE_ADDR (l)); >> >> - /* Remember which part of the address space this object uses. */ >> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >> + /* During mapping, align the mapping address of the LOAD segments >> + according to own p_align. This helps OS map its code segment to >> + huge pages. */ >> + if (l->l_load_align > GLRO(dl_pagesize)) { >> + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, >> + mappref, fd, >> + l->l_load_align, maplength); >> + } else { >> + /* Remember which part of the address space this object uses. */ >> + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >> c->prot, >> MAP_COPY|MAP_FILE, >> fd, c->mapoff); > > Please follow the glibc coding format. OK Thanks. > >> + } >> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) >> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; >> >> diff --git a/include/link.h b/include/link.h >> index aea268439c..fc6ce29fab 100644 >> --- a/include/link.h >> +++ b/include/link.h >> @@ -298,6 +298,9 @@ struct link_map >> >> /* Thread-local storage related info. */ >> >> + /* Alignment requirement of the LOAD block. */ >> + size_t l_load_align; >> + >> /* Start of the initialization image. */ >> void *l_tls_initimage; >> /* Size of the initialization image. */ >> -- >> 2.27.0 >> > > Thanks. > ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-08 2:14 ` Rongwei Wang @ 2021-12-08 2:33 ` H.J. Lu 2021-12-08 3:04 ` Rongwei Wang 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-08 2:33 UTC (permalink / raw) To: Rongwei Wang; +Cc: GNU C Library, xuyu, gavin.dg On Tue, Dec 7, 2021 at 6:14 PM Rongwei Wang <rongwei.wang@linux.alibaba.com> wrote: > > Hi hjl > > On 12/6/21 10:48 PM, H.J. Lu wrote: > > On Fri, Dec 3, 2021 at 9:00 PM Rongwei Wang via Libc-alpha > > <libc-alpha@sourceware.org> wrote: > >> > >> Now, ld.so always map the LOAD segments and aligned by base > >> page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And > >> this patch improve the scheme here. In this patch, ld.so > >> can align the mapping address of the first LOAD segment with > >> p_align when p_align is greater than the current base page > >> size. > > > > This is a bug fix. Please open a glibc bug: > > > > https://sourceware.org/bugzilla/enter_bug.cgiOK > > And I requesting the account. > > > > with a testcase which should align variables to 2MB in the main > By the way, I have a question about whether we need to align each LOAD > segments? In our patch, we only fixed the mapping address for the first > LOAD segment. I think the first one should be sufficient. You can verify it with a 2MB aligned variable in PIE: [hjl@gnu-cfl-2 tmp]$ cat x.c #include <stdio.h> int foo __attribute__((aligned(0x200000))) = 1; int main () { printf ("foo: %p\n", &foo); } [hjl@gnu-cfl-2 tmp]$ gcc -no-pie x.c [hjl@gnu-cfl-2 tmp]$ ./a.out foo: 0x800000 [hjl@gnu-cfl-2 tmp]$ gcc x.c -fpie -pie [hjl@gnu-cfl-2 tmp]$ ./a.out foo: 0x55c529afe000 [hjl@gnu-cfl-2 tmp]$ > > program and a shared library. Please include the testcase in > > your patch and mention the glibc bug in your commit message. > > > >> And this change makes code segments using huge pages become > >> simple and available. > >> > >> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > >> --- > >> elf/dl-load.c | 1 + > >> elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- > >> include/link.h | 3 +++ > >> 3 files changed, 56 insertions(+), 2 deletions(-) > >> > >> diff --git a/elf/dl-load.c b/elf/dl-load.c > >> index e39980fb19..136cfe2fa8 100644 > >> --- a/elf/dl-load.c > >> +++ b/elf/dl-load.c > >> @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, > >> c->dataend = ph->p_vaddr + ph->p_filesz; > >> c->allocend = ph->p_vaddr + ph->p_memsz; > >> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); > >> + l->l_load_align = ph->p_align; > >> > >> /* Determine whether there is a gap between the last segment > >> and this one. */ > >> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h > >> index ac9f09ab4c..ae03236045 100644 > >> --- a/elf/dl-map-segments.h > >> +++ b/elf/dl-map-segments.h > >> @@ -18,6 +18,47 @@ > >> > >> #include <dl-load.h> > >> > >> +static __always_inline void * > >> +_dl_map_segments_align (const struct loadcmd *c, > >> + ElfW(Addr) mappref, int fd, size_t alignment, > >> + const size_t maplength) > >> +{ > >> + unsigned long map_start, map_start_align, map_end; > >> + unsigned long maplen = (maplength >= alignment) ? > >> + (maplength + alignment) : (2 * alignment); > >> + > >> + /* Allocate enough space to ensure that address aligned by > >> + p_align is included. 
*/ > >> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, > >> + PROT_NONE, > >> + MAP_ANONYMOUS | MAP_PRIVATE, > >> + -1, 0); > >> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { > >> + /* If mapping a aligned address failed, then ... */ > >> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > >> + c->prot, > >> + MAP_COPY|MAP_FILE, > >> + fd, c->mapoff); > >> + > >> + return (void *) map_start; > >> + } > >> + map_start_align = ALIGN_UP(map_start, alignment); > >> + map_end = map_start_align + maplength; > >> + > >> + /* Remember which part of the address space this object uses. */ > >> + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, > >> + c->prot, > >> + MAP_COPY|MAP_FILE|MAP_FIXED, > >> + fd, c->mapoff); > >> + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) > >> + return MAP_FAILED; > >> + if (map_start_align > map_start) > >> + __munmap((void *)map_start, map_start_align - map_start); > >> + __munmap((void *)map_end, map_start + maplen - map_end); > >> + > >> + return (void *) map_start_align; > >> +} > >> + > > > > Please follow the glibc coding format. > > > >> /* This implementation assumes (as does the corresponding implementation > >> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects > >> are always laid out with all segments contiguous (or with gaps > >> @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, > >> c->mapstart & GLRO(dl_use_load_bias)) > >> - MAP_BASE_ADDR (l)); > >> > >> - /* Remember which part of the address space this object uses. */ > >> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > >> + /* During mapping, align the mapping address of the LOAD segments > >> + according to own p_align. This helps OS map its code segment to > >> + huge pages. */ > >> + if (l->l_load_align > GLRO(dl_pagesize)) { > >> + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, > >> + mappref, fd, > >> + l->l_load_align, maplength); > >> + } else { > >> + /* Remember which part of the address space this object uses. */ > >> + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > >> c->prot, > >> MAP_COPY|MAP_FILE, > >> fd, c->mapoff); > > > > Please follow the glibc coding format. > OK > > Thanks. > > > >> + } > >> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) > >> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; > >> > >> diff --git a/include/link.h b/include/link.h > >> index aea268439c..fc6ce29fab 100644 > >> --- a/include/link.h > >> +++ b/include/link.h > >> @@ -298,6 +298,9 @@ struct link_map > >> > >> /* Thread-local storage related info. */ > >> > >> + /* Alignment requirement of the LOAD block. */ > >> + size_t l_load_align; > >> + > >> /* Start of the initialization image. */ > >> void *l_tls_initimage; > >> /* Size of the initialization image. */ > >> -- > >> 2.27.0 > >> > > > > Thanks. > > -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
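As a complement to the aligned-variable test above, the placement of every PT_LOAD segment can also be checked at run time by walking the program headers. This is only an illustrative sketch, not the testcase requested for the patch:

#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int
check_load_segments (struct dl_phdr_info *info, size_t size, void *data)
{
  for (size_t i = 0; i < info->dlpi_phnum; i++)
    {
      const ElfW(Phdr) *ph = &info->dlpi_phdr[i];
      if (ph->p_type == PT_LOAD && ph->p_align > 1)
        {
          ElfW(Addr) addr = info->dlpi_addr + ph->p_vaddr;
          printf ("%s: PT_LOAD at %#lx, p_align %#lx: %s\n",
                  info->dlpi_name[0] != '\0' ? info->dlpi_name : "(main)",
                  (unsigned long) addr, (unsigned long) ph->p_align,
                  addr % ph->p_align == 0 ? "aligned" : "NOT aligned");
        }
    }
  return 0;
}

int
main (void)
{
  /* Print how each loaded object's PT_LOAD segments were placed.  */
  dl_iterate_phdr (check_load_segments, NULL);
  return 0;
}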
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-08 2:33 ` H.J. Lu @ 2021-12-08 3:04 ` Rongwei Wang 2021-12-08 23:52 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Rongwei Wang @ 2021-12-08 3:04 UTC (permalink / raw) To: H.J. Lu; +Cc: GNU C Library, xuyu, gavin.dg On 12/8/21 10:33 AM, H.J. Lu wrote: > On Tue, Dec 7, 2021 at 6:14 PM Rongwei Wang > <rongwei.wang@linux.alibaba.com> wrote: >> >> Hi hjl >> >> On 12/6/21 10:48 PM, H.J. Lu wrote: >>> On Fri, Dec 3, 2021 at 9:00 PM Rongwei Wang via Libc-alpha >>> <libc-alpha@sourceware.org> wrote: >>>> >>>> Now, ld.so always map the LOAD segments and aligned by base >>>> page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And >>>> this patch improve the scheme here. In this patch, ld.so >>>> can align the mapping address of the first LOAD segment with >>>> p_align when p_align is greater than the current base page >>>> size. >>> >>> This is a bug fix. Please open a glibc bug: >>> >>> https://sourceware.org/bugzilla/enter_bug.cgiOK >> >> And I requesting the account. >>> >>> with a testcase which should align variables to 2MB in the main >> By the way, I have a question about whether we need to align each LOAD >> segments? In our patch, we only fixed the mapping address for the first >> LOAD segment. > > I think the first one should be sufficient. You can verify it with a > 2MB aligned variable in PIE: > > [hjl@gnu-cfl-2 tmp]$ cat x.c > #include <stdio.h> > > int foo __attribute__((aligned(0x200000))) = 1; > > int > main () > { > printf ("foo: %p\n", &foo); > } > [hjl@gnu-cfl-2 tmp]$ gcc -no-pie x.c > [hjl@gnu-cfl-2 tmp]$ ./a.out > foo: 0x800000 > [hjl@gnu-cfl-2 tmp]$ gcc x.c -fpie -pie > [hjl@gnu-cfl-2 tmp]$ ./a.out > foo: 0x55c529afe000 > [hjl@gnu-cfl-2 tmp]$ Learned it! Thanks. > >>> program and a shared library. Please include the testcase in >>> your patch and mention the glibc bug in your commit message. >>> >>>> And this change makes code segments using huge pages become >>>> simple and available. >>>> >>>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> >>>> --- >>>> elf/dl-load.c | 1 + >>>> elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- >>>> include/link.h | 3 +++ >>>> 3 files changed, 56 insertions(+), 2 deletions(-) >>>> >>>> diff --git a/elf/dl-load.c b/elf/dl-load.c >>>> index e39980fb19..136cfe2fa8 100644 >>>> --- a/elf/dl-load.c >>>> +++ b/elf/dl-load.c >>>> @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, >>>> c->dataend = ph->p_vaddr + ph->p_filesz; >>>> c->allocend = ph->p_vaddr + ph->p_memsz; >>>> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); >>>> + l->l_load_align = ph->p_align; >>>> >>>> /* Determine whether there is a gap between the last segment >>>> and this one. */ >>>> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h >>>> index ac9f09ab4c..ae03236045 100644 >>>> --- a/elf/dl-map-segments.h >>>> +++ b/elf/dl-map-segments.h >>>> @@ -18,6 +18,47 @@ >>>> >>>> #include <dl-load.h> >>>> >>>> +static __always_inline void * >>>> +_dl_map_segments_align (const struct loadcmd *c, >>>> + ElfW(Addr) mappref, int fd, size_t alignment, >>>> + const size_t maplength) >>>> +{ >>>> + unsigned long map_start, map_start_align, map_end; >>>> + unsigned long maplen = (maplength >= alignment) ? >>>> + (maplength + alignment) : (2 * alignment); >>>> + >>>> + /* Allocate enough space to ensure that address aligned by >>>> + p_align is included. 
*/ >>>> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, >>>> + PROT_NONE, >>>> + MAP_ANONYMOUS | MAP_PRIVATE, >>>> + -1, 0); >>>> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { >>>> + /* If mapping a aligned address failed, then ... */ >>>> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >>>> + c->prot, >>>> + MAP_COPY|MAP_FILE, >>>> + fd, c->mapoff); >>>> + >>>> + return (void *) map_start; >>>> + } >>>> + map_start_align = ALIGN_UP(map_start, alignment); >>>> + map_end = map_start_align + maplength; >>>> + >>>> + /* Remember which part of the address space this object uses. */ >>>> + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, >>>> + c->prot, >>>> + MAP_COPY|MAP_FILE|MAP_FIXED, >>>> + fd, c->mapoff); >>>> + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) >>>> + return MAP_FAILED; >>>> + if (map_start_align > map_start) >>>> + __munmap((void *)map_start, map_start_align - map_start); >>>> + __munmap((void *)map_end, map_start + maplen - map_end); >>>> + >>>> + return (void *) map_start_align; >>>> +} >>>> + >>> >>> Please follow the glibc coding format. >>> >>>> /* This implementation assumes (as does the corresponding implementation >>>> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects >>>> are always laid out with all segments contiguous (or with gaps >>>> @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, >>>> c->mapstart & GLRO(dl_use_load_bias)) >>>> - MAP_BASE_ADDR (l)); >>>> >>>> - /* Remember which part of the address space this object uses. */ >>>> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >>>> + /* During mapping, align the mapping address of the LOAD segments >>>> + according to own p_align. This helps OS map its code segment to >>>> + huge pages. */ >>>> + if (l->l_load_align > GLRO(dl_pagesize)) { >>>> + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, >>>> + mappref, fd, >>>> + l->l_load_align, maplength); >>>> + } else { >>>> + /* Remember which part of the address space this object uses. */ >>>> + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >>>> c->prot, >>>> MAP_COPY|MAP_FILE, >>>> fd, c->mapoff); >>> >>> Please follow the glibc coding format. >> OK >> >> Thanks. >>> >>>> + } >>>> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) >>>> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; >>>> >>>> diff --git a/include/link.h b/include/link.h >>>> index aea268439c..fc6ce29fab 100644 >>>> --- a/include/link.h >>>> +++ b/include/link.h >>>> @@ -298,6 +298,9 @@ struct link_map >>>> >>>> /* Thread-local storage related info. */ >>>> >>>> + /* Alignment requirement of the LOAD block. */ >>>> + size_t l_load_align; >>>> + >>>> /* Start of the initialization image. */ >>>> void *l_tls_initimage; >>>> /* Size of the initialization image. */ >>>> -- >>>> 2.27.0 >>>> >>> >>> Thanks. >>> > > > ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-08 3:04 ` Rongwei Wang @ 2021-12-08 23:52 ` H.J. Lu 2021-12-09 1:43 ` Rongwei Wang 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-08 23:52 UTC (permalink / raw) To: Rongwei Wang; +Cc: GNU C Library, xuyu, gavin.dg On Tue, Dec 7, 2021 at 7:05 PM Rongwei Wang <rongwei.wang@linux.alibaba.com> wrote: > > > > On 12/8/21 10:33 AM, H.J. Lu wrote: > > On Tue, Dec 7, 2021 at 6:14 PM Rongwei Wang > > <rongwei.wang@linux.alibaba.com> wrote: > >> > >> Hi hjl > >> > >> On 12/6/21 10:48 PM, H.J. Lu wrote: > >>> On Fri, Dec 3, 2021 at 9:00 PM Rongwei Wang via Libc-alpha > >>> <libc-alpha@sourceware.org> wrote: > >>>> > >>>> Now, ld.so always map the LOAD segments and aligned by base > >>>> page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And > >>>> this patch improve the scheme here. In this patch, ld.so > >>>> can align the mapping address of the first LOAD segment with > >>>> p_align when p_align is greater than the current base page > >>>> size. > >>> > >>> This is a bug fix. Please open a glibc bug: > >>> > >>> https://sourceware.org/bugzilla/enter_bug.cgiOK > >> > >> And I requesting the account. > >>> > >>> with a testcase which should align variables to 2MB in the main > >> By the way, I have a question about whether we need to align each LOAD > >> segments? In our patch, we only fixed the mapping address for the first > >> LOAD segment. > > > > I think the first one should be sufficient. You can verify it with a > > 2MB aligned variable in PIE: > > > > [hjl@gnu-cfl-2 tmp]$ cat x.c > > #include <stdio.h> > > > > int foo __attribute__((aligned(0x200000))) = 1; > > > > int > > main () > > { > > printf ("foo: %p\n", &foo); > > } > > [hjl@gnu-cfl-2 tmp]$ gcc -no-pie x.c > > [hjl@gnu-cfl-2 tmp]$ ./a.out > > foo: 0x800000 > > [hjl@gnu-cfl-2 tmp]$ gcc x.c -fpie -pie > > [hjl@gnu-cfl-2 tmp]$ ./a.out > > foo: 0x55c529afe000 > > [hjl@gnu-cfl-2 tmp]$ > > Learned it! I opened: https://sourceware.org/bugzilla/show_bug.cgi?id=28676 > Thanks. > > > >>> program and a shared library. Please include the testcase in > >>> your patch and mention the glibc bug in your commit message. > >>> > >>>> And this change makes code segments using huge pages become > >>>> simple and available. > >>>> > >>>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > >>>> --- > >>>> elf/dl-load.c | 1 + > >>>> elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- > >>>> include/link.h | 3 +++ > >>>> 3 files changed, 56 insertions(+), 2 deletions(-) > >>>> > >>>> diff --git a/elf/dl-load.c b/elf/dl-load.c > >>>> index e39980fb19..136cfe2fa8 100644 > >>>> --- a/elf/dl-load.c > >>>> +++ b/elf/dl-load.c > >>>> @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, > >>>> c->dataend = ph->p_vaddr + ph->p_filesz; > >>>> c->allocend = ph->p_vaddr + ph->p_memsz; > >>>> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); > >>>> + l->l_load_align = ph->p_align; > >>>> > >>>> /* Determine whether there is a gap between the last segment > >>>> and this one. 
*/ > >>>> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h > >>>> index ac9f09ab4c..ae03236045 100644 > >>>> --- a/elf/dl-map-segments.h > >>>> +++ b/elf/dl-map-segments.h > >>>> @@ -18,6 +18,47 @@ > >>>> > >>>> #include <dl-load.h> > >>>> > >>>> +static __always_inline void * > >>>> +_dl_map_segments_align (const struct loadcmd *c, > >>>> + ElfW(Addr) mappref, int fd, size_t alignment, > >>>> + const size_t maplength) > >>>> +{ > >>>> + unsigned long map_start, map_start_align, map_end; > >>>> + unsigned long maplen = (maplength >= alignment) ? > >>>> + (maplength + alignment) : (2 * alignment); > >>>> + > >>>> + /* Allocate enough space to ensure that address aligned by > >>>> + p_align is included. */ > >>>> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, > >>>> + PROT_NONE, > >>>> + MAP_ANONYMOUS | MAP_PRIVATE, > >>>> + -1, 0); > >>>> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { > >>>> + /* If mapping a aligned address failed, then ... */ > >>>> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > >>>> + c->prot, > >>>> + MAP_COPY|MAP_FILE, > >>>> + fd, c->mapoff); > >>>> + > >>>> + return (void *) map_start; > >>>> + } > >>>> + map_start_align = ALIGN_UP(map_start, alignment); > >>>> + map_end = map_start_align + maplength; > >>>> + > >>>> + /* Remember which part of the address space this object uses. */ > >>>> + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, > >>>> + c->prot, > >>>> + MAP_COPY|MAP_FILE|MAP_FIXED, > >>>> + fd, c->mapoff); > >>>> + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) > >>>> + return MAP_FAILED; > >>>> + if (map_start_align > map_start) > >>>> + __munmap((void *)map_start, map_start_align - map_start); > >>>> + __munmap((void *)map_end, map_start + maplen - map_end); > >>>> + > >>>> + return (void *) map_start_align; > >>>> +} > >>>> + > >>> > >>> Please follow the glibc coding format. > >>> > >>>> /* This implementation assumes (as does the corresponding implementation > >>>> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects > >>>> are always laid out with all segments contiguous (or with gaps > >>>> @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, > >>>> c->mapstart & GLRO(dl_use_load_bias)) > >>>> - MAP_BASE_ADDR (l)); > >>>> > >>>> - /* Remember which part of the address space this object uses. */ > >>>> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > >>>> + /* During mapping, align the mapping address of the LOAD segments > >>>> + according to own p_align. This helps OS map its code segment to > >>>> + huge pages. */ > >>>> + if (l->l_load_align > GLRO(dl_pagesize)) { > >>>> + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, > >>>> + mappref, fd, > >>>> + l->l_load_align, maplength); > >>>> + } else { > >>>> + /* Remember which part of the address space this object uses. */ > >>>> + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > >>>> c->prot, > >>>> MAP_COPY|MAP_FILE, > >>>> fd, c->mapoff); > >>> > >>> Please follow the glibc coding format. > >> OK > >> > >> Thanks. > >>> > >>>> + } > >>>> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) > >>>> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; > >>>> > >>>> diff --git a/include/link.h b/include/link.h > >>>> index aea268439c..fc6ce29fab 100644 > >>>> --- a/include/link.h > >>>> +++ b/include/link.h > >>>> @@ -298,6 +298,9 @@ struct link_map > >>>> > >>>> /* Thread-local storage related info. 
*/ > >>>> > >>>> + /* Alignment requirement of the LOAD block. */ > >>>> + size_t l_load_align; > >>>> + > >>>> /* Start of the initialization image. */ > >>>> void *l_tls_initimage; > >>>> /* Size of the initialization image. */ > >>>> -- > >>>> 2.27.0 > >>>> > >>> > >>> Thanks. > >>> > > > > > > -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align 2021-12-08 23:52 ` H.J. Lu @ 2021-12-09 1:43 ` Rongwei Wang 0 siblings, 0 replies; 42+ messages in thread From: Rongwei Wang @ 2021-12-09 1:43 UTC (permalink / raw) To: H.J. Lu; +Cc: GNU C Library, xuyu, gavin.dg On 12/9/21 7:52 AM, H.J. Lu wrote: > On Tue, Dec 7, 2021 at 7:05 PM Rongwei Wang > <rongwei.wang@linux.alibaba.com> wrote: >> >> >> >> On 12/8/21 10:33 AM, H.J. Lu wrote: >>> On Tue, Dec 7, 2021 at 6:14 PM Rongwei Wang >>> <rongwei.wang@linux.alibaba.com> wrote: >>>> >>>> Hi hjl >>>> >>>> On 12/6/21 10:48 PM, H.J. Lu wrote: >>>>> On Fri, Dec 3, 2021 at 9:00 PM Rongwei Wang via Libc-alpha >>>>> <libc-alpha@sourceware.org> wrote: >>>>>> >>>>>> Now, ld.so always map the LOAD segments and aligned by base >>>>>> page size (e.g. 4k in x86 or 4k, 16k and 64k in arm64). And >>>>>> this patch improve the scheme here. In this patch, ld.so >>>>>> can align the mapping address of the first LOAD segment with >>>>>> p_align when p_align is greater than the current base page >>>>>> size. >>>>> >>>>> This is a bug fix. Please open a glibc bug: >>>>> >>>>> https://sourceware.org/bugzilla/enter_bug.cgiOK >>>> >>>> And I requesting the account. >>>>> >>>>> with a testcase which should align variables to 2MB in the main >>>> By the way, I have a question about whether we need to align each LOAD >>>> segments? In our patch, we only fixed the mapping address for the first >>>> LOAD segment. >>> >>> I think the first one should be sufficient. You can verify it with a >>> 2MB aligned variable in PIE: >>> >>> [hjl@gnu-cfl-2 tmp]$ cat x.c >>> #include <stdio.h> >>> >>> int foo __attribute__((aligned(0x200000))) = 1; >>> >>> int >>> main () >>> { >>> printf ("foo: %p\n", &foo); >>> } >>> [hjl@gnu-cfl-2 tmp]$ gcc -no-pie x.c >>> [hjl@gnu-cfl-2 tmp]$ ./a.out >>> foo: 0x800000 >>> [hjl@gnu-cfl-2 tmp]$ gcc x.c -fpie -pie >>> [hjl@gnu-cfl-2 tmp]$ ./a.out >>> foo: 0x55c529afe000 >>> [hjl@gnu-cfl-2 tmp]$ >> >> Learned it! > > I opened:Thanks. > > https://sourceware.org/bugzilla/show_bug.cgi?id=28676 Hi, I saw your report a kernel bug about PIE: https://bugzilla.kernel.org/show_bug.cgi?id=215275 I remember a related fix patch to this bug is also included in our patchset: https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/ So, this issue is regarded as a bug by glibc, I can resend this patch to kernel mail list and CC you. Thanks. > >> Thanks. >>> >>>>> program and a shared library. Please include the testcase in >>>>> your patch and mention the glibc bug in your commit message. >>>>> >>>>>> And this change makes code segments using huge pages become >>>>>> simple and available. >>>>>> >>>>>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> >>>>>> --- >>>>>> elf/dl-load.c | 1 + >>>>>> elf/dl-map-segments.h | 54 +++++++++++++++++++++++++++++++++++++++++-- >>>>>> include/link.h | 3 +++ >>>>>> 3 files changed, 56 insertions(+), 2 deletions(-) >>>>>> >>>>>> diff --git a/elf/dl-load.c b/elf/dl-load.c >>>>>> index e39980fb19..136cfe2fa8 100644 >>>>>> --- a/elf/dl-load.c >>>>>> +++ b/elf/dl-load.c >>>>>> @@ -1154,6 +1154,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, >>>>>> c->dataend = ph->p_vaddr + ph->p_filesz; >>>>>> c->allocend = ph->p_vaddr + ph->p_memsz; >>>>>> c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); >>>>>> + l->l_load_align = ph->p_align; >>>>>> >>>>>> /* Determine whether there is a gap between the last segment >>>>>> and this one. 
*/ >>>>>> diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h >>>>>> index ac9f09ab4c..ae03236045 100644 >>>>>> --- a/elf/dl-map-segments.h >>>>>> +++ b/elf/dl-map-segments.h >>>>>> @@ -18,6 +18,47 @@ >>>>>> >>>>>> #include <dl-load.h> >>>>>> >>>>>> +static __always_inline void * >>>>>> +_dl_map_segments_align (const struct loadcmd *c, >>>>>> + ElfW(Addr) mappref, int fd, size_t alignment, >>>>>> + const size_t maplength) >>>>>> +{ >>>>>> + unsigned long map_start, map_start_align, map_end; >>>>>> + unsigned long maplen = (maplength >= alignment) ? >>>>>> + (maplength + alignment) : (2 * alignment); >>>>>> + >>>>>> + /* Allocate enough space to ensure that address aligned by >>>>>> + p_align is included. */ >>>>>> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, >>>>>> + PROT_NONE, >>>>>> + MAP_ANONYMOUS | MAP_PRIVATE, >>>>>> + -1, 0); >>>>>> + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) { >>>>>> + /* If mapping a aligned address failed, then ... */ >>>>>> + map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >>>>>> + c->prot, >>>>>> + MAP_COPY|MAP_FILE, >>>>>> + fd, c->mapoff); >>>>>> + >>>>>> + return (void *) map_start; >>>>>> + } >>>>>> + map_start_align = ALIGN_UP(map_start, alignment); >>>>>> + map_end = map_start_align + maplength; >>>>>> + >>>>>> + /* Remember which part of the address space this object uses. */ >>>>>> + map_start_align = (ElfW(Addr)) __mmap ((void *) map_start_align, maplength, >>>>>> + c->prot, >>>>>> + MAP_COPY|MAP_FILE|MAP_FIXED, >>>>>> + fd, c->mapoff); >>>>>> + if (__glibc_unlikely ((void *) map_start_align == MAP_FAILED)) >>>>>> + return MAP_FAILED; >>>>>> + if (map_start_align > map_start) >>>>>> + __munmap((void *)map_start, map_start_align - map_start); >>>>>> + __munmap((void *)map_end, map_start + maplen - map_end); >>>>>> + >>>>>> + return (void *) map_start_align; >>>>>> +} >>>>>> + >>>>> >>>>> Please follow the glibc coding format. >>>>> >>>>>> /* This implementation assumes (as does the corresponding implementation >>>>>> of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects >>>>>> are always laid out with all segments contiguous (or with gaps >>>>>> @@ -52,11 +93,20 @@ _dl_map_segments (struct link_map *l, int fd, >>>>>> c->mapstart & GLRO(dl_use_load_bias)) >>>>>> - MAP_BASE_ADDR (l)); >>>>>> >>>>>> - /* Remember which part of the address space this object uses. */ >>>>>> - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >>>>>> + /* During mapping, align the mapping address of the LOAD segments >>>>>> + according to own p_align. This helps OS map its code segment to >>>>>> + huge pages. */ >>>>>> + if (l->l_load_align > GLRO(dl_pagesize)) { >>>>>> + l->l_map_start = (ElfW(Addr)) _dl_map_segments_align (c, >>>>>> + mappref, fd, >>>>>> + l->l_load_align, maplength); >>>>>> + } else { >>>>>> + /* Remember which part of the address space this object uses. */ >>>>>> + l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, >>>>>> c->prot, >>>>>> MAP_COPY|MAP_FILE, >>>>>> fd, c->mapoff); >>>>> >>>>> Please follow the glibc coding format. >>>> OK >>>> >>>> Thanks. >>>>> >>>>>> + } >>>>>> if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) >>>>>> return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; >>>>>> >>>>>> diff --git a/include/link.h b/include/link.h >>>>>> index aea268439c..fc6ce29fab 100644 >>>>>> --- a/include/link.h >>>>>> +++ b/include/link.h >>>>>> @@ -298,6 +298,9 @@ struct link_map >>>>>> >>>>>> /* Thread-local storage related info. 
*/ >>>>>> >>>>>> + /* Alignment requirement of the LOAD block. */ >>>>>> + size_t l_load_align; >>>>>> + >>>>>> /* Start of the initialization image. */ >>>>>> void *l_tls_initimage; >>>>>> /* Size of the initialization image. */ >>>>>> -- >>>>>> 2.27.0 >>>>>> >>>>> >>>>> Thanks. >>>>> >>> >>> >>> > > > ^ permalink raw reply [flat|nested] 42+ messages in thread
* [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored
  2021-12-04 4:58 [PATCH RFC 0/1] make ld.so map .text LOAD ssegments and aligned by p_align Rongwei Wang
  2021-12-04 4:58 ` [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang
@ 2021-12-10 12:39 ` Rongwei Wang
  2021-12-10 12:39 ` [PATCH v5 1/2] elf: Properly align PT_LOAD segments Rongwei Wang
  ` (2 more replies)
  2021-12-13 2:51 ` [PATCH v6 " Rongwei Wang
  2 siblings, 3 replies; 42+ messages in thread
From: Rongwei Wang @ 2021-12-10 12:39 UTC (permalink / raw)
To: libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg

Hi,

This patch set mainly fixes a reported bug, "p_align on PT_LOAD segment in
DSO isn't honored":
https://sourceware.org/bugzilla/show_bug.cgi?id=28676

A testcase had been built by H.J. Lu:
https://sourceware.org/bugzilla/attachment.cgi?id=13838
And in Patch 1/1, I also show a simple testcase modified from H.J. Lu's.

A similar bug for the main ELF executable was also reported:
https://bugzilla.kernel.org/show_bug.cgi?id=215275
A related fix patch can be found here:
https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/
Originally, we sent that patch to introduce .text hugepages and were not
aware of this bug. Now, I am not sure whether the kernel maintainers will
regard it as a bug, but it deserves another try.

Thanks.

Changelog:

v4 -> v5
- Patch "Add a testcase to check alignment of PT_LOAD segment"
  add a new testcase for the PT_LOAD segment
- Patch "elf: Properly align PT_LOAD segments"
  fix map_start to use map_start_aligned when the second mmap fails

v3 -> v4
- Patch "elf: Properly align PT_LOAD segments"
  call munmap when the second mmap fails

v2 -> v3
- Patch "elf: Properly align PT_LOAD segments"
  move mapalign into 'struct loadcmd'
  fix some coding style issues

RFC/v1 -> v2
- Patch "elf: align the mapping address of LOAD segments with p_align"
  fix the coding format and add a testcase in the commit message

RFC link:
https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/

H.J. Lu (1):
  Add a testcase to check alignment of PT_LOAD segment

Rongwei Wang (1):
  elf: Properly align PT_LOAD segments

 elf/Makefile          | 14 +++++++++++--
 elf/dl-load.c         |  1 +
 elf/dl-load.h         |  2 +-
 elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
 elf/tst-align3.c      | 37 ++++++++++++++++++++++++++++++++
 elf/tst-alignmod3.c   | 31 +++++++++++++++++++++++++++
 6 files changed, 127 insertions(+), 7 deletions(-)

-- 
2.27.0

^ permalink raw reply [flat|nested] 42+ messages in thread
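For reproducing the problem outside the glibc test suite, one assumed recipe (not taken from the patches themselves; libfoo.so and foo.c are placeholder names) is to build a DSO whose PT_LOAD segments request an alignment larger than the page size and then inspect it; the Align column printed by readelf for the LOAD entries should show 0x200000:

$ gcc -shared -fPIC -Wl,-z,max-page-size=0x200000 -o libfoo.so foo.c
$ readelf -lW libfoo.so | grep ' LOAD'

Before the fix, a program linking against or dlopen'ing such a DSO may see its segments mapped on an ordinary page boundary instead of the requested 2 MiB boundary.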
* [PATCH v5 1/2] elf: Properly align PT_LOAD segments 2021-12-10 12:39 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang @ 2021-12-10 12:39 ` Rongwei Wang 2021-12-10 15:43 ` H.J. Lu 2021-12-10 12:39 ` [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang 2021-12-10 13:13 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored H.J. Lu 2 siblings, 1 reply; 42+ messages in thread From: Rongwei Wang @ 2021-12-10 12:39 UTC (permalink / raw) To: libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg When PT_LOAD segment alignment > the page size, allocate enough space to ensure that the segment can be properly aligned. And this fix can help code segments use huge pages become simple and available. This fixes [BZ #28676]. Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/dl-load.c | 1 + elf/dl-load.h | 2 +- elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- 3 files changed, 47 insertions(+), 5 deletions(-) diff --git a/elf/dl-load.c b/elf/dl-load.c index bf8957e73c..9a23590bf4 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize)); c->dataend = ph->p_vaddr + ph->p_filesz; c->allocend = ph->p_vaddr + ph->p_memsz; + c->mapalign = ph->p_align; c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); /* Determine whether there is a gap between the last segment diff --git a/elf/dl-load.h b/elf/dl-load.h index e329d49a81..c121e3456c 100644 --- a/elf/dl-load.h +++ b/elf/dl-load.h @@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA; Its details have been expanded out and converted. */ struct loadcmd { - ElfW(Addr) mapstart, mapend, dataend, allocend; + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign; ElfW(Off) mapoff; int prot; /* PROT_* bits. */ }; diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h index f9fb110ee3..74abf324ed 100644 --- a/elf/dl-map-segments.h +++ b/elf/dl-map-segments.h @@ -18,6 +18,50 @@ #include <dl-load.h> +/* Map a segment and align it properly. */ + +static __always_inline ElfW(Addr) +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref, + const size_t maplength, int fd) +{ + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize))) + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot, + MAP_COPY|MAP_FILE, fd, c->mapoff); + + /* If the segment alignment > the page size, allocate enough space to + ensure that the segment can be properly aligned. */ + ElfW(Addr) maplen = (maplength >= c->mapalign + ? (maplength + c->mapalign) + : (2 * c->mapalign)); + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, + PROT_NONE, + MAP_ANONYMOUS|MAP_PRIVATE, + -1, 0); + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) + return map_start; + + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign); + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned, + maplength, c->prot, + MAP_COPY|MAP_FILE|MAP_FIXED, + fd, c->mapoff); + if (__glibc_unlikely ((void *) map_start_aligned == MAP_FAILED)) + __munmap ((void *) map_start, maplen); + else + { + /* Unmap the unused regions. 
*/ + ElfW(Addr) delta = map_start_aligned - map_start; + if (delta) + __munmap ((void *) map_start, delta); + ElfW(Addr) map_end = map_start_aligned + maplength; + delta = map_start + maplen - map_end; + if (delta) + __munmap ((void *) map_end, delta); + } + + return map_start_aligned; +} + /* This implementation assumes (as does the corresponding implementation of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects are always laid out with all segments contiguous (or with gaps @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd, - MAP_BASE_ADDR (l)); /* Remember which part of the address space this object uses. */ - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, - c->prot, - MAP_COPY|MAP_FILE, - fd, c->mapoff); + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd); if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; -- 2.27.0 ^ permalink raw reply [flat|nested] 42+ messages in thread
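To make the trimming arithmetic in _dl_map_segment concrete, here is a worked example with made-up numbers (4 KiB pages, c->mapalign = 0x200000, maplength = 0x34000). Since maplength < mapalign, maplen = 2 * 0x200000 = 0x400000. If the PROT_NONE reservation is placed at map_start = 0x7f1234501000, then map_start_aligned = ALIGN_UP (map_start, 0x200000) = 0x7f1234600000, and the file is mapped there with MAP_FIXED. The 0xff000-byte head [map_start, map_start_aligned) and the 0x2cd000-byte tail [map_start_aligned + maplength, map_start + maplen) are then unmapped; 0xff000 + 0x34000 + 0x2cd000 = 0x400000 accounts for the whole reservation, and the returned address is 2 MiB aligned.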
* Re: [PATCH v5 1/2] elf: Properly align PT_LOAD segments 2021-12-10 12:39 ` [PATCH v5 1/2] elf: Properly align PT_LOAD segments Rongwei Wang @ 2021-12-10 15:43 ` H.J. Lu 2021-12-10 15:45 ` Florian Weimer 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-10 15:43 UTC (permalink / raw) To: Rongwei Wang; +Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg On Fri, Dec 10, 2021 at 4:39 AM Rongwei Wang <rongwei.wang@linux.alibaba.com> wrote: > > When PT_LOAD segment alignment > the page size, allocate > enough space to ensure that the segment can be properly > aligned. > > And this fix can help code segments use huge pages become > simple and available. > > This fixes [BZ #28676]. > > Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > --- > elf/dl-load.c | 1 + > elf/dl-load.h | 2 +- > elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- > 3 files changed, 47 insertions(+), 5 deletions(-) > > diff --git a/elf/dl-load.c b/elf/dl-load.c > index bf8957e73c..9a23590bf4 100644 > --- a/elf/dl-load.c > +++ b/elf/dl-load.c > @@ -1150,6 +1150,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, > c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize)); > c->dataend = ph->p_vaddr + ph->p_filesz; > c->allocend = ph->p_vaddr + ph->p_memsz; > + c->mapalign = ph->p_align; > c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); > > /* Determine whether there is a gap between the last segment > diff --git a/elf/dl-load.h b/elf/dl-load.h > index e329d49a81..c121e3456c 100644 > --- a/elf/dl-load.h > +++ b/elf/dl-load.h > @@ -74,7 +74,7 @@ ELF_PREFERRED_ADDRESS_DATA; > Its details have been expanded out and converted. */ > struct loadcmd > { > - ElfW(Addr) mapstart, mapend, dataend, allocend; > + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign; > ElfW(Off) mapoff; > int prot; /* PROT_* bits. */ > }; > diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h > index f9fb110ee3..74abf324ed 100644 > --- a/elf/dl-map-segments.h > +++ b/elf/dl-map-segments.h > @@ -18,6 +18,50 @@ > > #include <dl-load.h> > > +/* Map a segment and align it properly. */ > + > +static __always_inline ElfW(Addr) > +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref, > + const size_t maplength, int fd) > +{ > + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize))) > + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot, > + MAP_COPY|MAP_FILE, fd, c->mapoff); > + > + /* If the segment alignment > the page size, allocate enough space to > + ensure that the segment can be properly aligned. */ > + ElfW(Addr) maplen = (maplength >= c->mapalign > + ? (maplength + c->mapalign) > + : (2 * c->mapalign)); > + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, > + PROT_NONE, > + MAP_ANONYMOUS|MAP_PRIVATE, > + -1, 0); > + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) > + return map_start; > + > + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign); > + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned, > + maplength, c->prot, > + MAP_COPY|MAP_FILE|MAP_FIXED, > + fd, c->mapoff); > + if (__glibc_unlikely ((void *) map_start_aligned == MAP_FAILED)) > + __munmap ((void *) map_start, maplen); > + else > + { > + /* Unmap the unused regions. 
*/ > + ElfW(Addr) delta = map_start_aligned - map_start; > + if (delta) > + __munmap ((void *) map_start, delta); > + ElfW(Addr) map_end = map_start_aligned + maplength; > + delta = map_start + maplen - map_end; > + if (delta) > + __munmap ((void *) map_end, delta); > + } > + > + return map_start_aligned; > +} > + > /* This implementation assumes (as does the corresponding implementation > of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects > are always laid out with all segments contiguous (or with gaps > @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd, > - MAP_BASE_ADDR (l)); > > /* Remember which part of the address space this object uses. */ > - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, > - c->prot, > - MAP_COPY|MAP_FILE, > - fd, c->mapoff); > + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd); > if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) > return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; > > -- > 2.27.0 > LGTM. Reviewed-by: H.J. Lu <hjl.tools@gmail.com> I will check it for you. Thanks. -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 1/2] elf: Properly align PT_LOAD segments 2021-12-10 15:43 ` H.J. Lu @ 2021-12-10 15:45 ` Florian Weimer 2021-12-10 18:54 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Florian Weimer @ 2021-12-10 15:45 UTC (permalink / raw) To: H.J. Lu; +Cc: Rongwei Wang, GNU C Library, xuyu, gavin.dg * H. J. Lu: > Reviewed-by: H.J. Lu <hjl.tools@gmail.com> > > I will check it for you. Please adjust the copyright line due to Signed-off-by. Thanks. Florian ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 1/2] elf: Properly align PT_LOAD segments 2021-12-10 15:45 ` Florian Weimer @ 2021-12-10 18:54 ` H.J. Lu 2021-12-10 18:57 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-10 18:54 UTC (permalink / raw) To: Florian Weimer; +Cc: Rongwei Wang, GNU C Library, xuyu, gavin.dg [-- Attachment #1: Type: text/plain, Size: 309 bytes --] On Fri, Dec 10, 2021 at 7:45 AM Florian Weimer <fweimer@redhat.com> wrote: > > * H. J. Lu: > > > Reviewed-by: H.J. Lu <hjl.tools@gmail.com> > > > > I will check it for you. > > Please adjust the copyright line due to Signed-off-by. Thanks. > > Florian This is the patch I am checking in. Thanks. -- H.J. [-- Attachment #2: v6-0001-elf-Properly-align-PT_LOAD-segments-BZ-28676.patch --] [-- Type: text/x-patch, Size: 4909 bytes --] From 4c03045231941c3515e8ed99dcbe301e51e937e7 Mon Sep 17 00:00:00 2001 From: Rongwei Wang <rongwei.wang@linux.alibaba.com> Date: Fri, 10 Dec 2021 20:39:10 +0800 Subject: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] When PT_LOAD segment alignment > the page size, allocate enough space to ensure that the segment can be properly aligned. This change helps code segments use huge pages become simple and available. This fixes [BZ #28676]. Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/dl-load.c | 2 ++ elf/dl-load.h | 3 ++- elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- 3 files changed, 49 insertions(+), 5 deletions(-) diff --git a/elf/dl-load.c b/elf/dl-load.c index bf8957e73c..721593135e 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -1,5 +1,6 @@ /* Map in a shared object's segments from the file. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -1150,6 +1151,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize)); c->dataend = ph->p_vaddr + ph->p_filesz; c->allocend = ph->p_vaddr + ph->p_memsz; + c->mapalign = ph->p_align; c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); /* Determine whether there is a gap between the last segment diff --git a/elf/dl-load.h b/elf/dl-load.h index e329d49a81..e6dabcb336 100644 --- a/elf/dl-load.h +++ b/elf/dl-load.h @@ -1,5 +1,6 @@ /* Map in a shared object's segments from the file. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -74,7 +75,7 @@ ELF_PREFERRED_ADDRESS_DATA; Its details have been expanded out and converted. */ struct loadcmd { - ElfW(Addr) mapstart, mapend, dataend, allocend; + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign; ElfW(Off) mapoff; int prot; /* PROT_* bits. */ }; diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h index f9fb110ee3..74abf324ed 100644 --- a/elf/dl-map-segments.h +++ b/elf/dl-map-segments.h @@ -18,6 +18,50 @@ #include <dl-load.h> +/* Map a segment and align it properly. 
*/ + +static __always_inline ElfW(Addr) +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref, + const size_t maplength, int fd) +{ + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize))) + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot, + MAP_COPY|MAP_FILE, fd, c->mapoff); + + /* If the segment alignment > the page size, allocate enough space to + ensure that the segment can be properly aligned. */ + ElfW(Addr) maplen = (maplength >= c->mapalign + ? (maplength + c->mapalign) + : (2 * c->mapalign)); + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, + PROT_NONE, + MAP_ANONYMOUS|MAP_PRIVATE, + -1, 0); + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) + return map_start; + + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign); + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned, + maplength, c->prot, + MAP_COPY|MAP_FILE|MAP_FIXED, + fd, c->mapoff); + if (__glibc_unlikely ((void *) map_start_aligned == MAP_FAILED)) + __munmap ((void *) map_start, maplen); + else + { + /* Unmap the unused regions. */ + ElfW(Addr) delta = map_start_aligned - map_start; + if (delta) + __munmap ((void *) map_start, delta); + ElfW(Addr) map_end = map_start_aligned + maplength; + delta = map_start + maplen - map_end; + if (delta) + __munmap ((void *) map_end, delta); + } + + return map_start_aligned; +} + /* This implementation assumes (as does the corresponding implementation of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects are always laid out with all segments contiguous (or with gaps @@ -53,10 +97,7 @@ _dl_map_segments (struct link_map *l, int fd, - MAP_BASE_ADDR (l)); /* Remember which part of the address space this object uses. */ - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, - c->prot, - MAP_COPY|MAP_FILE, - fd, c->mapoff); + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd); if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; -- 2.33.1 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 1/2] elf: Properly align PT_LOAD segments 2021-12-10 18:54 ` H.J. Lu @ 2021-12-10 18:57 ` H.J. Lu 0 siblings, 0 replies; 42+ messages in thread From: H.J. Lu @ 2021-12-10 18:57 UTC (permalink / raw) To: Florian Weimer; +Cc: Rongwei Wang, GNU C Library, xuyu, gavin.dg [-- Attachment #1: Type: text/plain, Size: 447 bytes --] On Fri, Dec 10, 2021 at 10:54 AM H.J. Lu <hjl.tools@gmail.com> wrote: > > On Fri, Dec 10, 2021 at 7:45 AM Florian Weimer <fweimer@redhat.com> wrote: > > > > * H. J. Lu: > > > > > Reviewed-by: H.J. Lu <hjl.tools@gmail.com> > > > > > > I will check it for you. > > > > Please adjust the copyright line due to Signed-off-by. Thanks. > > > > Florian > > This is the patch I am checking in. > > Thanks. > > -- > H.J. This is the right one. -- H.J. [-- Attachment #2: v6-0001-elf-Properly-align-PT_LOAD-segments-BZ-28676.patch --] [-- Type: text/x-patch, Size: 5199 bytes --] From d0e20bc42d5b0b23969cc32c439448e62c4cb832 Mon Sep 17 00:00:00 2001 From: Rongwei Wang <rongwei.wang@linux.alibaba.com> Date: Fri, 10 Dec 2021 20:39:10 +0800 Subject: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] When PT_LOAD segment alignment > the page size, allocate enough space to ensure that the segment can be properly aligned. This change helps code segments use huge pages become simple and available. This fixes [BZ #28676]. Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/dl-load.c | 2 ++ elf/dl-load.h | 3 ++- elf/dl-map-segments.h | 50 +++++++++++++++++++++++++++++++++++++++---- 3 files changed, 50 insertions(+), 5 deletions(-) diff --git a/elf/dl-load.c b/elf/dl-load.c index bf8957e73c..721593135e 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -1,5 +1,6 @@ /* Map in a shared object's segments from the file. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -1150,6 +1151,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize)); c->dataend = ph->p_vaddr + ph->p_filesz; c->allocend = ph->p_vaddr + ph->p_memsz; + c->mapalign = ph->p_align; c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); /* Determine whether there is a gap between the last segment diff --git a/elf/dl-load.h b/elf/dl-load.h index e329d49a81..e6dabcb336 100644 --- a/elf/dl-load.h +++ b/elf/dl-load.h @@ -1,5 +1,6 @@ /* Map in a shared object's segments from the file. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -74,7 +75,7 @@ ELF_PREFERRED_ADDRESS_DATA; Its details have been expanded out and converted. */ struct loadcmd { - ElfW(Addr) mapstart, mapend, dataend, allocend; + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign; ElfW(Off) mapoff; int prot; /* PROT_* bits. */ }; diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h index f9fb110ee3..70a4c40695 100644 --- a/elf/dl-map-segments.h +++ b/elf/dl-map-segments.h @@ -1,5 +1,6 @@ /* Map in a shared object's segments. Generic version. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. 
The GNU C Library is free software; you can redistribute it and/or @@ -18,6 +19,50 @@ #include <dl-load.h> +/* Map a segment and align it properly. */ + +static __always_inline ElfW(Addr) +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref, + const size_t maplength, int fd) +{ + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize))) + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot, + MAP_COPY|MAP_FILE, fd, c->mapoff); + + /* If the segment alignment > the page size, allocate enough space to + ensure that the segment can be properly aligned. */ + ElfW(Addr) maplen = (maplength >= c->mapalign + ? (maplength + c->mapalign) + : (2 * c->mapalign)); + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, + PROT_NONE, + MAP_ANONYMOUS|MAP_PRIVATE, + -1, 0); + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) + return map_start; + + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign); + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned, + maplength, c->prot, + MAP_COPY|MAP_FILE|MAP_FIXED, + fd, c->mapoff); + if (__glibc_unlikely ((void *) map_start_aligned == MAP_FAILED)) + __munmap ((void *) map_start, maplen); + else + { + /* Unmap the unused regions. */ + ElfW(Addr) delta = map_start_aligned - map_start; + if (delta) + __munmap ((void *) map_start, delta); + ElfW(Addr) map_end = map_start_aligned + maplength; + delta = map_start + maplen - map_end; + if (delta) + __munmap ((void *) map_end, delta); + } + + return map_start_aligned; +} + /* This implementation assumes (as does the corresponding implementation of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects are always laid out with all segments contiguous (or with gaps @@ -53,10 +98,7 @@ _dl_map_segments (struct link_map *l, int fd, - MAP_BASE_ADDR (l)); /* Remember which part of the address space this object uses. */ - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, - c->prot, - MAP_COPY|MAP_FILE, - fd, c->mapoff); + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd); if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; -- 2.33.1 ^ permalink raw reply [flat|nested] 42+ messages in thread
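[Editorial example] To make the reservation size in _dl_map_segment concrete, the worked example below uses assumed numbers (4 KiB pages, a 2 MiB p_align and a 5 MiB maplength; none of these figures come from the thread). It shows that the over-allocation always leaves room for an aligned window of maplength bytes plus the head and tail trims.

#include <stdio.h>

/* Worked example with assumed numbers; illustrates the reservation
   size computed by _dl_map_segment, not output from the patch.  */
int
main (void)
{
  unsigned long pagesize  = 0x1000;     /* 4 KiB  */
  unsigned long mapalign  = 0x200000;   /* 2 MiB  */
  unsigned long maplength = 0x500000;   /* 5 MiB  */
  unsigned long maplen = maplength >= mapalign
                         ? maplength + mapalign : 2 * mapalign;
  /* mmap returns a page-aligned address, so ALIGN_UP can move the
     start forward by at most mapalign - pagesize; an aligned window
     of maplength bytes therefore always fits in the reservation.  */
  unsigned long worst_head = mapalign - pagesize;
  printf ("reservation 0x%lx, largest possible head trim 0x%lx, "
          "remaining tail trim 0x%lx\n",
          maplen, worst_head, maplen - worst_head - maplength);
  return 0;
}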
* [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 12:39 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang 2021-12-10 12:39 ` [PATCH v5 1/2] elf: Properly align PT_LOAD segments Rongwei Wang @ 2021-12-10 12:39 ` Rongwei Wang 2021-12-10 13:48 ` Adhemerval Zanella 2021-12-10 13:13 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored H.J. Lu 2 siblings, 1 reply; 42+ messages in thread From: Rongwei Wang @ 2021-12-10 12:39 UTC (permalink / raw) To: libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg From: "H.J. Lu" <hjl.tools@gmail.com> This patch adds a testcase for PT_LOAD segment to check it is properly aligned when the alignment > the page size. Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/Makefile | 14 ++++++++++++-- elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ 3 files changed, 80 insertions(+), 2 deletions(-) create mode 100644 elf/tst-align3.c create mode 100644 elf/tst-alignmod3.c diff --git a/elf/Makefile b/elf/Makefile index ef36008673..b16128ac8b 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ tst-tls4 tst-tls5 \ tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ - tst-align tst-align2 \ + tst-align tst-align2 tst-align3 \ tst-dlmodcount tst-dlopenrpath tst-deep1 \ tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ tests-container += tst-pldd tst-dlopen-tlsmodid-container \ tst-dlopen-self-container tst-preload-pthread-libc test-srcs = tst-pathopt +ifeq (yes,$(have-fpie)) +tests-pie += tst-align3 +endif selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) ifneq ($(selinux-enabled),1) tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ circlemod3 circlemod3a \ reldep8mod1 reldep8mod2 reldep8mod3 \ reldep9mod1 reldep9mod2 reldep9mod3 \ - tst-alignmod tst-alignmod2 \ + tst-alignmod tst-alignmod2 tst-alignmod3 \ $(modules-execstack-$(have-z-execstack)) \ tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ tst-dlmopen1mod tst-auditmod1 \ @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so +ifeq (yes,$(have-fpie)) +CFLAGS-tst-align3.c += $(PIE-ccflag) +endif +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 +$(objpfx)tst-alignmod3.so: $(libsupport) $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so diff --git a/elf/tst-align3.c b/elf/tst-align3.c new file mode 100644 index 0000000000..5697c0bbaf --- /dev/null +++ b/elf/tst-align3.c @@ -0,0 +1,37 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. 
+ + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +#define ALIGN 0x200000 + +int bar __attribute__ ((aligned (ALIGN))) = 1; + +extern int do_load_test (void); + +static int +do_test (void) +{ + printf ("bar: %p\n", &bar); + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); + + return do_load_test (); +} + +#include <support/test-driver.c> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c new file mode 100644 index 0000000000..50ec08462c --- /dev/null +++ b/elf/tst-alignmod3.c @@ -0,0 +1,31 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +#define ALIGN 0x200000 + +int foo __attribute__ ((aligned (ALIGN))) = 1; + +void +do_load_test (void) +{ + printf ("foo: %p\n", &foo); + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); +} -- 2.27.0 ^ permalink raw reply [flat|nested] 42+ messages in thread
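[Editorial example] Beyond the aligned-variable check used by the testcase above, the effect of the fix can also be observed at run time by walking every loaded object's program headers and checking whether its load bias honours each PT_LOAD's p_align. The sketch below does that with dl_iterate_phdr; it is an illustrative check only and is not part of the proposed glibc test.

#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

/* Flag any PT_LOAD whose p_align is not satisfied by the object's
   load bias.  With the BZ #28676 fix in place, the bias is a multiple
   of p_align, so over-aligned variables keep their alignment.  */
static int
check_object (struct dl_phdr_info *info, size_t size, void *data)
{
  for (int i = 0; i < info->dlpi_phnum; i++)
    {
      const ElfW(Phdr) *ph = &info->dlpi_phdr[i];
      if (ph->p_type != PT_LOAD || ph->p_align <= 1)
        continue;
      if (info->dlpi_addr % ph->p_align != 0)
        printf ("%s: load bias %#lx not aligned to p_align %#lx\n",
                info->dlpi_name[0] != '\0' ? info->dlpi_name : "(main)",
                (unsigned long) info->dlpi_addr,
                (unsigned long) ph->p_align);
    }
  return 0;
}

int
main (void)
{
  dl_iterate_phdr (check_object, NULL);
  return 0;
}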
* Re: [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 12:39 ` [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang @ 2021-12-10 13:48 ` Adhemerval Zanella 2021-12-10 15:41 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Adhemerval Zanella @ 2021-12-10 13:48 UTC (permalink / raw) To: Rongwei Wang, libc-alpha, hjl.tools, fweimer; +Cc: xuyu, gavin.dg On 10/12/2021 09:39, Rongwei Wang via Libc-alpha wrote: > From: "H.J. Lu" <hjl.tools@gmail.com> > > This patch adds a testcase for PT_LOAD segment to check it is > properly aligned when the alignment > the page size. > > Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > --- > elf/Makefile | 14 ++++++++++++-- > elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ > elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ > 3 files changed, 80 insertions(+), 2 deletions(-) > create mode 100644 elf/tst-align3.c > create mode 100644 elf/tst-alignmod3.c > > diff --git a/elf/Makefile b/elf/Makefile > index ef36008673..b16128ac8b 100644 > --- a/elf/Makefile > +++ b/elf/Makefile > @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ > tst-tls4 tst-tls5 \ > tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ > tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ > - tst-align tst-align2 \ > + tst-align tst-align2 tst-align3 \ > tst-dlmodcount tst-dlopenrpath tst-deep1 \ > tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ > unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ > @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ > tests-container += tst-pldd tst-dlopen-tlsmodid-container \ > tst-dlopen-self-container tst-preload-pthread-libc > test-srcs = tst-pathopt > +ifeq (yes,$(have-fpie)) > +tests-pie += tst-align3 > +endif > selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) > ifneq ($(selinux-enabled),1) > tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog > @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ > circlemod3 circlemod3a \ > reldep8mod1 reldep8mod2 reldep8mod3 \ > reldep9mod1 reldep9mod2 reldep9mod3 \ > - tst-alignmod tst-alignmod2 \ > + tst-alignmod tst-alignmod2 tst-alignmod3 \ > $(modules-execstack-$(have-z-execstack)) \ > tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ > tst-dlmopen1mod tst-auditmod1 \ > @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) > CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) > $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so > $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so > +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so > +ifeq (yes,$(have-fpie)) > +CFLAGS-tst-align3.c += $(PIE-ccflag) > +endif > +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 > +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 > +$(objpfx)tst-alignmod3.so: $(libsupport) > > $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ > $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so > diff --git a/elf/tst-align3.c b/elf/tst-align3.c > new file mode 100644 > index 0000000000..5697c0bbaf > --- /dev/null > +++ b/elf/tst-align3.c > @@ -0,0 +1,37 @@ > +/* Check alignment of PT_LOAD segment in a shared library. > + Copyright (C) 2021 Free Software Foundation, Inc. > + This file is part of the GNU C Library. 
> + > + The GNU C Library is free software; you can redistribute it and/or > + modify it under the terms of the GNU Lesser General Public > + License as published by the Free Software Foundation; either > + version 2.1 of the License, or (at your option) any later version. > + > + The GNU C Library is distributed in the hope that it will be useful, > + but WITHOUT ANY WARRANTY; without even the implied warranty of > + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > + Lesser General Public License for more details. > + > + You should have received a copy of the GNU Lesser General Public > + License along with the GNU C Library; if not, see > + <https://www.gnu.org/licenses/>. */ > + > +#include <support/check.h> > +#include <tst-stack-align.h> > + > +#define ALIGN 0x200000 > + > +int bar __attribute__ ((aligned (ALIGN))) = 1; > + > +extern int do_load_test (void); > + > +static int > +do_test (void) > +{ > + printf ("bar: %p\n", &bar); > + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); > + > + return do_load_test (); > +} > + > +#include <support/test-driver.c> > diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c > new file mode 100644 > index 0000000000..50ec08462c > --- /dev/null > +++ b/elf/tst-alignmod3.c > @@ -0,0 +1,31 @@ > +/* Check alignment of PT_LOAD segment in a shared library. > + Copyright (C) 2021 Free Software Foundation, Inc. > + This file is part of the GNU C Library. > + > + The GNU C Library is free software; you can redistribute it and/or > + modify it under the terms of the GNU Lesser General Public > + License as published by the Free Software Foundation; either > + version 2.1 of the License, or (at your option) any later version. > + > + The GNU C Library is distributed in the hope that it will be useful, > + but WITHOUT ANY WARRANTY; without even the implied warranty of > + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > + Lesser General Public License for more details. > + > + You should have received a copy of the GNU Lesser General Public > + License along with the GNU C Library; if not, see > + <https://www.gnu.org/licenses/>. */ > + > +#include <support/check.h> > +#include <tst-stack-align.h> > + > +#define ALIGN 0x200000 I think it should cover all possible pagesize we current support. Maybe add a comment here or on Makefile about it. > + > +int foo __attribute__ ((aligned (ALIGN))) = 1; > + > +void > +do_load_test (void) > +{ > + printf ("foo: %p\n", &foo); > + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); > +} > ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 13:48 ` Adhemerval Zanella @ 2021-12-10 15:41 ` H.J. Lu 2021-12-10 18:56 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-10 15:41 UTC (permalink / raw) To: Adhemerval Zanella Cc: Rongwei Wang, GNU C Library, Florian Weimer, xuyu, gavin.dg [-- Attachment #1: Type: text/plain, Size: 6359 bytes --] On Fri, Dec 10, 2021 at 5:48 AM Adhemerval Zanella <adhemerval.zanella@linaro.org> wrote: > > > > On 10/12/2021 09:39, Rongwei Wang via Libc-alpha wrote: > > From: "H.J. Lu" <hjl.tools@gmail.com> > > > > This patch adds a testcase for PT_LOAD segment to check it is > > properly aligned when the alignment > the page size. > > > > Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> > > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > > --- > > elf/Makefile | 14 ++++++++++++-- > > elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ > > elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ > > 3 files changed, 80 insertions(+), 2 deletions(-) > > create mode 100644 elf/tst-align3.c > > create mode 100644 elf/tst-alignmod3.c > > > > diff --git a/elf/Makefile b/elf/Makefile > > index ef36008673..b16128ac8b 100644 > > --- a/elf/Makefile > > +++ b/elf/Makefile > > @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ > > tst-tls4 tst-tls5 \ > > tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ > > tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ > > - tst-align tst-align2 \ > > + tst-align tst-align2 tst-align3 \ > > tst-dlmodcount tst-dlopenrpath tst-deep1 \ > > tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ > > unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ > > @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ > > tests-container += tst-pldd tst-dlopen-tlsmodid-container \ > > tst-dlopen-self-container tst-preload-pthread-libc > > test-srcs = tst-pathopt > > +ifeq (yes,$(have-fpie)) > > +tests-pie += tst-align3 > > +endif > > selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) > > ifneq ($(selinux-enabled),1) > > tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog > > @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ > > circlemod3 circlemod3a \ > > reldep8mod1 reldep8mod2 reldep8mod3 \ > > reldep9mod1 reldep9mod2 reldep9mod3 \ > > - tst-alignmod tst-alignmod2 \ > > + tst-alignmod tst-alignmod2 tst-alignmod3 \ > > $(modules-execstack-$(have-z-execstack)) \ > > tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ > > tst-dlmopen1mod tst-auditmod1 \ > > @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) > > CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) > > $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so > > $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so > > +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so > > +ifeq (yes,$(have-fpie)) > > +CFLAGS-tst-align3.c += $(PIE-ccflag) > > +endif > > +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 > > +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 > > +$(objpfx)tst-alignmod3.so: $(libsupport) > > > > $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ > > $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so > > diff --git a/elf/tst-align3.c b/elf/tst-align3.c > > new file mode 100644 > > index 0000000000..5697c0bbaf > > --- /dev/null > > +++ b/elf/tst-align3.c > > @@ -0,0 +1,37 @@ > > +/* 
Check alignment of PT_LOAD segment in a shared library. > > + Copyright (C) 2021 Free Software Foundation, Inc. > > + This file is part of the GNU C Library. > > + > > + The GNU C Library is free software; you can redistribute it and/or > > + modify it under the terms of the GNU Lesser General Public > > + License as published by the Free Software Foundation; either > > + version 2.1 of the License, or (at your option) any later version. > > + > > + The GNU C Library is distributed in the hope that it will be useful, > > + but WITHOUT ANY WARRANTY; without even the implied warranty of > > + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > > + Lesser General Public License for more details. > > + > > + You should have received a copy of the GNU Lesser General Public > > + License along with the GNU C Library; if not, see > > + <https://www.gnu.org/licenses/>. */ > > + > > +#include <support/check.h> > > +#include <tst-stack-align.h> > > + > > +#define ALIGN 0x200000 > > + > > +int bar __attribute__ ((aligned (ALIGN))) = 1; > > + > > +extern int do_load_test (void); > > + > > +static int > > +do_test (void) > > +{ > > + printf ("bar: %p\n", &bar); > > + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); > > + > > + return do_load_test (); > > +} > > + > > +#include <support/test-driver.c> > > diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c > > new file mode 100644 > > index 0000000000..50ec08462c > > --- /dev/null > > +++ b/elf/tst-alignmod3.c > > @@ -0,0 +1,31 @@ > > +/* Check alignment of PT_LOAD segment in a shared library. > > + Copyright (C) 2021 Free Software Foundation, Inc. > > + This file is part of the GNU C Library. > > + > > + The GNU C Library is free software; you can redistribute it and/or > > + modify it under the terms of the GNU Lesser General Public > > + License as published by the Free Software Foundation; either > > + version 2.1 of the License, or (at your option) any later version. > > + > > + The GNU C Library is distributed in the hope that it will be useful, > > + but WITHOUT ANY WARRANTY; without even the implied warranty of > > + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > > + Lesser General Public License for more details. > > + > > + You should have received a copy of the GNU Lesser General Public > > + License along with the GNU C Library; if not, see > > + <https://www.gnu.org/licenses/>. */ > > + > > +#include <support/check.h> > > +#include <tst-stack-align.h> > > + > > +#define ALIGN 0x200000 > > I think it should cover all possible pagesize we current support. Maybe add > a comment here or on Makefile about it. Here is the v2 patch I added /* This should cover all possible page sizes we currently support. */ > > + > > +int foo __attribute__ ((aligned (ALIGN))) = 1; > > + > > +void > > +do_load_test (void) > > +{ > > + printf ("foo: %p\n", &foo); > > + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); > > +} > > -- H.J. [-- Attachment #2: v2-0001-Add-a-testcase-to-check-alignment-of-PT_LOAD-segm.patch --] [-- Type: text/x-patch, Size: 5328 bytes --] From f5d30f3be4261eb0340de050a5bba586fca55b4f Mon Sep 17 00:00:00 2001 From: "H.J. 
Lu" <hjl.tools@gmail.com> Date: Thu, 9 Dec 2021 07:01:33 -0800 Subject: [PATCH v2] Add a testcase to check alignment of PT_LOAD segment [BZ #28676] --- elf/Makefile | 14 ++++++++++++-- elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ elf/tst-alignmod3.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 81 insertions(+), 2 deletions(-) create mode 100644 elf/tst-align3.c create mode 100644 elf/tst-alignmod3.c diff --git a/elf/Makefile b/elf/Makefile index ef36008673..b16128ac8b 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ tst-tls4 tst-tls5 \ tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ - tst-align tst-align2 \ + tst-align tst-align2 tst-align3 \ tst-dlmodcount tst-dlopenrpath tst-deep1 \ tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ tests-container += tst-pldd tst-dlopen-tlsmodid-container \ tst-dlopen-self-container tst-preload-pthread-libc test-srcs = tst-pathopt +ifeq (yes,$(have-fpie)) +tests-pie += tst-align3 +endif selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) ifneq ($(selinux-enabled),1) tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ circlemod3 circlemod3a \ reldep8mod1 reldep8mod2 reldep8mod3 \ reldep9mod1 reldep9mod2 reldep9mod3 \ - tst-alignmod tst-alignmod2 \ + tst-alignmod tst-alignmod2 tst-alignmod3 \ $(modules-execstack-$(have-z-execstack)) \ tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ tst-dlmopen1mod tst-auditmod1 \ @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so +ifeq (yes,$(have-fpie)) +CFLAGS-tst-align3.c += $(PIE-ccflag) +endif +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 +$(objpfx)tst-alignmod3.so: $(libsupport) $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so diff --git a/elf/tst-align3.c b/elf/tst-align3.c new file mode 100644 index 0000000000..5697c0bbaf --- /dev/null +++ b/elf/tst-align3.c @@ -0,0 +1,37 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. 
*/ + +#include <support/check.h> +#include <tst-stack-align.h> + +#define ALIGN 0x200000 + +int bar __attribute__ ((aligned (ALIGN))) = 1; + +extern int do_load_test (void); + +static int +do_test (void) +{ + printf ("bar: %p\n", &bar); + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); + + return do_load_test (); +} + +#include <support/test-driver.c> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c new file mode 100644 index 0000000000..0d33f2379d --- /dev/null +++ b/elf/tst-alignmod3.c @@ -0,0 +1,32 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +/* This should cover all possible page sizes we currently support. */ +#define ALIGN 0x200000 + +int foo __attribute__ ((aligned (ALIGN))) = 1; + +void +do_load_test (void) +{ + printf ("foo: %p\n", &foo); + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); +} -- 2.33.1 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 15:41 ` H.J. Lu @ 2021-12-10 18:56 ` H.J. Lu 2021-12-10 20:05 ` Adhemerval Zanella 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-10 18:56 UTC (permalink / raw) To: Adhemerval Zanella Cc: Rongwei Wang, GNU C Library, Florian Weimer, xuyu, gavin.dg [-- Attachment #1: Type: text/plain, Size: 6779 bytes --] On Fri, Dec 10, 2021 at 7:41 AM H.J. Lu <hjl.tools@gmail.com> wrote: > > On Fri, Dec 10, 2021 at 5:48 AM Adhemerval Zanella > <adhemerval.zanella@linaro.org> wrote: > > > > > > > > On 10/12/2021 09:39, Rongwei Wang via Libc-alpha wrote: > > > From: "H.J. Lu" <hjl.tools@gmail.com> > > > > > > This patch adds a testcase for PT_LOAD segment to check it is > > > properly aligned when the alignment > the page size. > > > > > > Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> > > > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > > > --- > > > elf/Makefile | 14 ++++++++++++-- > > > elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ > > > elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ > > > 3 files changed, 80 insertions(+), 2 deletions(-) > > > create mode 100644 elf/tst-align3.c > > > create mode 100644 elf/tst-alignmod3.c > > > > > > diff --git a/elf/Makefile b/elf/Makefile > > > index ef36008673..b16128ac8b 100644 > > > --- a/elf/Makefile > > > +++ b/elf/Makefile > > > @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ > > > tst-tls4 tst-tls5 \ > > > tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ > > > tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ > > > - tst-align tst-align2 \ > > > + tst-align tst-align2 tst-align3 \ > > > tst-dlmodcount tst-dlopenrpath tst-deep1 \ > > > tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ > > > unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ > > > @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ > > > tests-container += tst-pldd tst-dlopen-tlsmodid-container \ > > > tst-dlopen-self-container tst-preload-pthread-libc > > > test-srcs = tst-pathopt > > > +ifeq (yes,$(have-fpie)) > > > +tests-pie += tst-align3 > > > +endif > > > selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) > > > ifneq ($(selinux-enabled),1) > > > tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog > > > @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ > > > circlemod3 circlemod3a \ > > > reldep8mod1 reldep8mod2 reldep8mod3 \ > > > reldep9mod1 reldep9mod2 reldep9mod3 \ > > > - tst-alignmod tst-alignmod2 \ > > > + tst-alignmod tst-alignmod2 tst-alignmod3 \ > > > $(modules-execstack-$(have-z-execstack)) \ > > > tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ > > > tst-dlmopen1mod tst-auditmod1 \ > > > @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) > > > CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) > > > $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so > > > $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so > > > +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so > > > +ifeq (yes,$(have-fpie)) > > > +CFLAGS-tst-align3.c += $(PIE-ccflag) > > > +endif > > > +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 > > > +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 > > > +$(objpfx)tst-alignmod3.so: $(libsupport) > > > > > > $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ > > > $(objpfx)unload3mod3.so 
$(objpfx)unload3mod4.so > > > diff --git a/elf/tst-align3.c b/elf/tst-align3.c > > > new file mode 100644 > > > index 0000000000..5697c0bbaf > > > --- /dev/null > > > +++ b/elf/tst-align3.c > > > @@ -0,0 +1,37 @@ > > > +/* Check alignment of PT_LOAD segment in a shared library. > > > + Copyright (C) 2021 Free Software Foundation, Inc. > > > + This file is part of the GNU C Library. > > > + > > > + The GNU C Library is free software; you can redistribute it and/or > > > + modify it under the terms of the GNU Lesser General Public > > > + License as published by the Free Software Foundation; either > > > + version 2.1 of the License, or (at your option) any later version. > > > + > > > + The GNU C Library is distributed in the hope that it will be useful, > > > + but WITHOUT ANY WARRANTY; without even the implied warranty of > > > + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > > > + Lesser General Public License for more details. > > > + > > > + You should have received a copy of the GNU Lesser General Public > > > + License along with the GNU C Library; if not, see > > > + <https://www.gnu.org/licenses/>. */ > > > + > > > +#include <support/check.h> > > > +#include <tst-stack-align.h> > > > + > > > +#define ALIGN 0x200000 > > > + > > > +int bar __attribute__ ((aligned (ALIGN))) = 1; > > > + > > > +extern int do_load_test (void); > > > + > > > +static int > > > +do_test (void) > > > +{ > > > + printf ("bar: %p\n", &bar); > > > + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); > > > + > > > + return do_load_test (); > > > +} > > > + > > > +#include <support/test-driver.c> > > > diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c > > > new file mode 100644 > > > index 0000000000..50ec08462c > > > --- /dev/null > > > +++ b/elf/tst-alignmod3.c > > > @@ -0,0 +1,31 @@ > > > +/* Check alignment of PT_LOAD segment in a shared library. > > > + Copyright (C) 2021 Free Software Foundation, Inc. > > > + This file is part of the GNU C Library. > > > + > > > + The GNU C Library is free software; you can redistribute it and/or > > > + modify it under the terms of the GNU Lesser General Public > > > + License as published by the Free Software Foundation; either > > > + version 2.1 of the License, or (at your option) any later version. > > > + > > > + The GNU C Library is distributed in the hope that it will be useful, > > > + but WITHOUT ANY WARRANTY; without even the implied warranty of > > > + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > > > + Lesser General Public License for more details. > > > + > > > + You should have received a copy of the GNU Lesser General Public > > > + License along with the GNU C Library; if not, see > > > + <https://www.gnu.org/licenses/>. */ > > > + > > > +#include <support/check.h> > > > +#include <tst-stack-align.h> > > > + > > > +#define ALIGN 0x200000 > > > > I think it should cover all possible pagesize we current support. Maybe add > > a comment here or on Makefile about it. > > Here is the v2 patch I added > > /* This should cover all possible page sizes we currently support. */ > > > > + > > > +int foo __attribute__ ((aligned (ALIGN))) = 1; > > > + > > > +void > > > +do_load_test (void) > > > +{ > > > + printf ("foo: %p\n", &foo); > > > + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); > > > +} > > > > > > > -- > H.J. I am checking in this. -- H.J. 
[-- Attachment #2: v6-0002-Add-a-testcase-to-check-alignment-of-PT_LOAD-segm.patch --] [-- Type: text/x-patch, Size: 5405 bytes --] From 831eb4d329e9c7f14a70acffe256b79d04c6d004 Mon Sep 17 00:00:00 2001 From: "H.J. Lu" <hjl.tools@gmail.com> Date: Thu, 9 Dec 2021 07:01:33 -0800 Subject: [PATCH v6 2/2] Add a testcase to check alignment of PT_LOAD segment [BZ #28676] --- elf/Makefile | 14 ++++++++++++-- elf/tst-align3.c | 38 ++++++++++++++++++++++++++++++++++++++ elf/tst-alignmod3.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 82 insertions(+), 2 deletions(-) create mode 100644 elf/tst-align3.c create mode 100644 elf/tst-alignmod3.c diff --git a/elf/Makefile b/elf/Makefile index d0bb0daa7e..fe42caeb0e 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ tst-tls4 tst-tls5 \ tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ - tst-align tst-align2 \ + tst-align tst-align2 tst-align3 \ tst-dlmodcount tst-dlopenrpath tst-deep1 \ tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ tests-container += tst-pldd tst-dlopen-tlsmodid-container \ tst-dlopen-self-container tst-preload-pthread-libc test-srcs = tst-pathopt +ifeq (yes,$(have-fpie)) +tests-pie += tst-align3 +endif selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) ifneq ($(selinux-enabled),1) tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ circlemod3 circlemod3a \ reldep8mod1 reldep8mod2 reldep8mod3 \ reldep9mod1 reldep9mod2 reldep9mod3 \ - tst-alignmod tst-alignmod2 \ + tst-alignmod tst-alignmod2 tst-alignmod3 \ $(modules-execstack-$(have-z-execstack)) \ tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ tst-dlmopen1mod tst-auditmod1 \ @@ -1093,6 +1096,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so +ifeq (yes,$(have-fpie)) +CFLAGS-tst-align3.c += $(PIE-ccflag) +endif +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 +$(objpfx)tst-alignmod3.so: $(libsupport) $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so diff --git a/elf/tst-align3.c b/elf/tst-align3.c new file mode 100644 index 0000000000..ac86d623a6 --- /dev/null +++ b/elf/tst-align3.c @@ -0,0 +1,38 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. 
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +/* This should cover all possible page sizes we currently support. */ +#define ALIGN 0x200000 + +int bar __attribute__ ((aligned (ALIGN))) = 1; + +extern int do_load_test (void); + +static int +do_test (void) +{ + printf ("bar: %p\n", &bar); + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); + + return do_load_test (); +} + +#include <support/test-driver.c> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c new file mode 100644 index 0000000000..0d33f2379d --- /dev/null +++ b/elf/tst-alignmod3.c @@ -0,0 +1,32 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +/* This should cover all possible page sizes we currently support. */ +#define ALIGN 0x200000 + +int foo __attribute__ ((aligned (ALIGN))) = 1; + +void +do_load_test (void) +{ + printf ("foo: %p\n", &foo); + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); +} -- 2.33.1 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 18:56 ` H.J. Lu @ 2021-12-10 20:05 ` Adhemerval Zanella 2021-12-10 20:24 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Adhemerval Zanella @ 2021-12-10 20:05 UTC (permalink / raw) To: H.J. Lu; +Cc: Rongwei Wang, GNU C Library, Florian Weimer, xuyu, gavin.dg On 10/12/2021 15:56, H.J. Lu wrote: > On Fri, Dec 10, 2021 at 7:41 AM H.J. Lu <hjl.tools@gmail.com> wrote: >> >> On Fri, Dec 10, 2021 at 5:48 AM Adhemerval Zanella >> <adhemerval.zanella@linaro.org> wrote: >>> >>> >>> >>> On 10/12/2021 09:39, Rongwei Wang via Libc-alpha wrote: >>>> From: "H.J. Lu" <hjl.tools@gmail.com> >>>> >>>> This patch adds a testcase for PT_LOAD segment to check it is >>>> properly aligned when the alignment > the page size. >>>> >>>> Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> >>>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> >>>> --- >>>> elf/Makefile | 14 ++++++++++++-- >>>> elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ >>>> elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ >>>> 3 files changed, 80 insertions(+), 2 deletions(-) >>>> create mode 100644 elf/tst-align3.c >>>> create mode 100644 elf/tst-alignmod3.c >>>> >>>> diff --git a/elf/Makefile b/elf/Makefile >>>> index ef36008673..b16128ac8b 100644 >>>> --- a/elf/Makefile >>>> +++ b/elf/Makefile >>>> @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ >>>> tst-tls4 tst-tls5 \ >>>> tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ >>>> tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ >>>> - tst-align tst-align2 \ >>>> + tst-align tst-align2 tst-align3 \ >>>> tst-dlmodcount tst-dlopenrpath tst-deep1 \ >>>> tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ >>>> unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ >>>> @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ >>>> tests-container += tst-pldd tst-dlopen-tlsmodid-container \ >>>> tst-dlopen-self-container tst-preload-pthread-libc >>>> test-srcs = tst-pathopt >>>> +ifeq (yes,$(have-fpie)) >>>> +tests-pie += tst-align3 >>>> +endif >>>> selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) >>>> ifneq ($(selinux-enabled),1) >>>> tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog >>>> @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ >>>> circlemod3 circlemod3a \ >>>> reldep8mod1 reldep8mod2 reldep8mod3 \ >>>> reldep9mod1 reldep9mod2 reldep9mod3 \ >>>> - tst-alignmod tst-alignmod2 \ >>>> + tst-alignmod tst-alignmod2 tst-alignmod3 \ >>>> $(modules-execstack-$(have-z-execstack)) \ >>>> tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ >>>> tst-dlmopen1mod tst-auditmod1 \ >>>> @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) >>>> CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) >>>> $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so >>>> $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so >>>> +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so >>>> +ifeq (yes,$(have-fpie)) >>>> +CFLAGS-tst-align3.c += $(PIE-ccflag) >>>> +endif >>>> +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 >>>> +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 >>>> +$(objpfx)tst-alignmod3.so: $(libsupport) >>>> >>>> $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ >>>> $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so >>>> diff --git a/elf/tst-align3.c b/elf/tst-align3.c >>>> 
new file mode 100644 >>>> index 0000000000..5697c0bbaf >>>> --- /dev/null >>>> +++ b/elf/tst-align3.c >>>> @@ -0,0 +1,37 @@ >>>> +/* Check alignment of PT_LOAD segment in a shared library. >>>> + Copyright (C) 2021 Free Software Foundation, Inc. >>>> + This file is part of the GNU C Library. >>>> + >>>> + The GNU C Library is free software; you can redistribute it and/or >>>> + modify it under the terms of the GNU Lesser General Public >>>> + License as published by the Free Software Foundation; either >>>> + version 2.1 of the License, or (at your option) any later version. >>>> + >>>> + The GNU C Library is distributed in the hope that it will be useful, >>>> + but WITHOUT ANY WARRANTY; without even the implied warranty of >>>> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >>>> + Lesser General Public License for more details. >>>> + >>>> + You should have received a copy of the GNU Lesser General Public >>>> + License along with the GNU C Library; if not, see >>>> + <https://www.gnu.org/licenses/>. */ >>>> + >>>> +#include <support/check.h> >>>> +#include <tst-stack-align.h> >>>> + >>>> +#define ALIGN 0x200000 >>>> + >>>> +int bar __attribute__ ((aligned (ALIGN))) = 1; >>>> + >>>> +extern int do_load_test (void); >>>> + >>>> +static int >>>> +do_test (void) >>>> +{ >>>> + printf ("bar: %p\n", &bar); >>>> + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); >>>> + >>>> + return do_load_test (); >>>> +} >>>> + >>>> +#include <support/test-driver.c> >>>> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c >>>> new file mode 100644 >>>> index 0000000000..50ec08462c >>>> --- /dev/null >>>> +++ b/elf/tst-alignmod3.c >>>> @@ -0,0 +1,31 @@ >>>> +/* Check alignment of PT_LOAD segment in a shared library. >>>> + Copyright (C) 2021 Free Software Foundation, Inc. >>>> + This file is part of the GNU C Library. >>>> + >>>> + The GNU C Library is free software; you can redistribute it and/or >>>> + modify it under the terms of the GNU Lesser General Public >>>> + License as published by the Free Software Foundation; either >>>> + version 2.1 of the License, or (at your option) any later version. >>>> + >>>> + The GNU C Library is distributed in the hope that it will be useful, >>>> + but WITHOUT ANY WARRANTY; without even the implied warranty of >>>> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >>>> + Lesser General Public License for more details. >>>> + >>>> + You should have received a copy of the GNU Lesser General Public >>>> + License along with the GNU C Library; if not, see >>>> + <https://www.gnu.org/licenses/>. */ >>>> + >>>> +#include <support/check.h> >>>> +#include <tst-stack-align.h> >>>> + >>>> +#define ALIGN 0x200000 >>> >>> I think it should cover all possible pagesize we current support. Maybe add >>> a comment here or on Makefile about it. >> >> Here is the v2 patch I added >> >> /* This should cover all possible page sizes we currently support. */ >> >>>> + >>>> +int foo __attribute__ ((aligned (ALIGN))) = 1; >>>> + >>>> +void >>>> +do_load_test (void) >>>> +{ >>>> + printf ("foo: %p\n", &foo); >>>> + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); >>>> +} >>>> >> >> >> >> -- >> H.J. > > I am checking in this. > It fails to build for alpha: /tmp/cc0teHmf.s: Assembler messages: /tmp/cc0teHmf.s:206: Error: Alignment too large: 16. 
assumed make[2]: *** [../o-iterator.mk:9: /home/azanella/Projects/glibc/build/alpha-linux-gnu/elf/tst-align3.o] Error 1 make[2]: Leaving directory '/home/azanella/Projects/glibc/glibc-git/elf' make[1]: *** [Makefile:483: elf/tests] Error 2 make[1]: Leaving directory '/home/azanella/Projects/glibc/glibc-git' make: *** [Makefile:9: check] Error 2 ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 20:05 ` Adhemerval Zanella @ 2021-12-10 20:24 ` H.J. Lu 2021-12-10 21:34 ` Adhemerval Zanella 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-10 20:24 UTC (permalink / raw) To: Adhemerval Zanella Cc: Rongwei Wang, GNU C Library, Florian Weimer, xuyu, gavin.dg On Fri, Dec 10, 2021 at 12:05 PM Adhemerval Zanella <adhemerval.zanella@linaro.org> wrote: > > > > On 10/12/2021 15:56, H.J. Lu wrote: > > On Fri, Dec 10, 2021 at 7:41 AM H.J. Lu <hjl.tools@gmail.com> wrote: > >> > >> On Fri, Dec 10, 2021 at 5:48 AM Adhemerval Zanella > >> <adhemerval.zanella@linaro.org> wrote: > >>> > >>> > >>> > >>> On 10/12/2021 09:39, Rongwei Wang via Libc-alpha wrote: > >>>> From: "H.J. Lu" <hjl.tools@gmail.com> > >>>> > >>>> This patch adds a testcase for PT_LOAD segment to check it is > >>>> properly aligned when the alignment > the page size. > >>>> > >>>> Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> > >>>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > >>>> --- > >>>> elf/Makefile | 14 ++++++++++++-- > >>>> elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ > >>>> elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ > >>>> 3 files changed, 80 insertions(+), 2 deletions(-) > >>>> create mode 100644 elf/tst-align3.c > >>>> create mode 100644 elf/tst-alignmod3.c > >>>> > >>>> diff --git a/elf/Makefile b/elf/Makefile > >>>> index ef36008673..b16128ac8b 100644 > >>>> --- a/elf/Makefile > >>>> +++ b/elf/Makefile > >>>> @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ > >>>> tst-tls4 tst-tls5 \ > >>>> tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ > >>>> tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ > >>>> - tst-align tst-align2 \ > >>>> + tst-align tst-align2 tst-align3 \ > >>>> tst-dlmodcount tst-dlopenrpath tst-deep1 \ > >>>> tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ > >>>> unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ > >>>> @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ > >>>> tests-container += tst-pldd tst-dlopen-tlsmodid-container \ > >>>> tst-dlopen-self-container tst-preload-pthread-libc > >>>> test-srcs = tst-pathopt > >>>> +ifeq (yes,$(have-fpie)) > >>>> +tests-pie += tst-align3 > >>>> +endif > >>>> selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) > >>>> ifneq ($(selinux-enabled),1) > >>>> tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog > >>>> @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ > >>>> circlemod3 circlemod3a \ > >>>> reldep8mod1 reldep8mod2 reldep8mod3 \ > >>>> reldep9mod1 reldep9mod2 reldep9mod3 \ > >>>> - tst-alignmod tst-alignmod2 \ > >>>> + tst-alignmod tst-alignmod2 tst-alignmod3 \ > >>>> $(modules-execstack-$(have-z-execstack)) \ > >>>> tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ > >>>> tst-dlmopen1mod tst-auditmod1 \ > >>>> @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) > >>>> CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) > >>>> $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so > >>>> $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so > >>>> +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so > >>>> +ifeq (yes,$(have-fpie)) > >>>> +CFLAGS-tst-align3.c += $(PIE-ccflag) > >>>> +endif > >>>> +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 > >>>> +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 > 
>>>> +$(objpfx)tst-alignmod3.so: $(libsupport) > >>>> > >>>> $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ > >>>> $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so > >>>> diff --git a/elf/tst-align3.c b/elf/tst-align3.c > >>>> new file mode 100644 > >>>> index 0000000000..5697c0bbaf > >>>> --- /dev/null > >>>> +++ b/elf/tst-align3.c > >>>> @@ -0,0 +1,37 @@ > >>>> +/* Check alignment of PT_LOAD segment in a shared library. > >>>> + Copyright (C) 2021 Free Software Foundation, Inc. > >>>> + This file is part of the GNU C Library. > >>>> + > >>>> + The GNU C Library is free software; you can redistribute it and/or > >>>> + modify it under the terms of the GNU Lesser General Public > >>>> + License as published by the Free Software Foundation; either > >>>> + version 2.1 of the License, or (at your option) any later version. > >>>> + > >>>> + The GNU C Library is distributed in the hope that it will be useful, > >>>> + but WITHOUT ANY WARRANTY; without even the implied warranty of > >>>> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > >>>> + Lesser General Public License for more details. > >>>> + > >>>> + You should have received a copy of the GNU Lesser General Public > >>>> + License along with the GNU C Library; if not, see > >>>> + <https://www.gnu.org/licenses/>. */ > >>>> + > >>>> +#include <support/check.h> > >>>> +#include <tst-stack-align.h> > >>>> + > >>>> +#define ALIGN 0x200000 > >>>> + > >>>> +int bar __attribute__ ((aligned (ALIGN))) = 1; > >>>> + > >>>> +extern int do_load_test (void); > >>>> + > >>>> +static int > >>>> +do_test (void) > >>>> +{ > >>>> + printf ("bar: %p\n", &bar); > >>>> + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); > >>>> + > >>>> + return do_load_test (); > >>>> +} > >>>> + > >>>> +#include <support/test-driver.c> > >>>> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c > >>>> new file mode 100644 > >>>> index 0000000000..50ec08462c > >>>> --- /dev/null > >>>> +++ b/elf/tst-alignmod3.c > >>>> @@ -0,0 +1,31 @@ > >>>> +/* Check alignment of PT_LOAD segment in a shared library. > >>>> + Copyright (C) 2021 Free Software Foundation, Inc. > >>>> + This file is part of the GNU C Library. > >>>> + > >>>> + The GNU C Library is free software; you can redistribute it and/or > >>>> + modify it under the terms of the GNU Lesser General Public > >>>> + License as published by the Free Software Foundation; either > >>>> + version 2.1 of the License, or (at your option) any later version. > >>>> + > >>>> + The GNU C Library is distributed in the hope that it will be useful, > >>>> + but WITHOUT ANY WARRANTY; without even the implied warranty of > >>>> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU > >>>> + Lesser General Public License for more details. > >>>> + > >>>> + You should have received a copy of the GNU Lesser General Public > >>>> + License along with the GNU C Library; if not, see > >>>> + <https://www.gnu.org/licenses/>. */ > >>>> + > >>>> +#include <support/check.h> > >>>> +#include <tst-stack-align.h> > >>>> + > >>>> +#define ALIGN 0x200000 > >>> > >>> I think it should cover all possible pagesize we current support. Maybe add > >>> a comment here or on Makefile about it. > >> > >> Here is the v2 patch I added > >> > >> /* This should cover all possible page sizes we currently support. 
*/ > >> > >>>> + > >>>> +int foo __attribute__ ((aligned (ALIGN))) = 1; > >>>> + > >>>> +void > >>>> +do_load_test (void) > >>>> +{ > >>>> + printf ("foo: %p\n", &foo); > >>>> + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); > >>>> +} > >>>> > >> > >> > >> > >> -- > >> H.J. > > > > I am checking in this. > > > > It fails to build for alpha: > > /tmp/cc0teHmf.s: Assembler messages: > /tmp/cc0teHmf.s:206: Error: Alignment too large: 16. assumed > make[2]: *** [../o-iterator.mk:9: /home/azanella/Projects/glibc/build/alpha-linux-gnu/elf/tst-align3.o] Error 1 > make[2]: Leaving directory '/home/azanella/Projects/glibc/glibc-git/elf' > make[1]: *** [Makefile:483: elf/tests] Error 2 > make[1]: Leaving directory '/home/azanella/Projects/glibc/glibc-git' > make: *** [Makefile:9: check] Error 2 The maximum alignment for alpha is 0x10000. I will add a configure-time check. -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-10 20:24 ` H.J. Lu @ 2021-12-10 21:34 ` Adhemerval Zanella 0 siblings, 0 replies; 42+ messages in thread From: Adhemerval Zanella @ 2021-12-10 21:34 UTC (permalink / raw) To: H.J. Lu; +Cc: Rongwei Wang, GNU C Library, Florian Weimer, xuyu, gavin.dg On 10/12/2021 17:24, H.J. Lu wrote: > On Fri, Dec 10, 2021 at 12:05 PM Adhemerval Zanella > <adhemerval.zanella@linaro.org> wrote: >> >> >> >> On 10/12/2021 15:56, H.J. Lu wrote: >>> On Fri, Dec 10, 2021 at 7:41 AM H.J. Lu <hjl.tools@gmail.com> wrote: >>>> >>>> On Fri, Dec 10, 2021 at 5:48 AM Adhemerval Zanella >>>> <adhemerval.zanella@linaro.org> wrote: >>>>> >>>>> >>>>> >>>>> On 10/12/2021 09:39, Rongwei Wang via Libc-alpha wrote: >>>>>> From: "H.J. Lu" <hjl.tools@gmail.com> >>>>>> >>>>>> This patch adds a testcase for PT_LOAD segment to check it is >>>>>> properly aligned when the alignment > the page size. >>>>>> >>>>>> Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> >>>>>> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> >>>>>> --- >>>>>> elf/Makefile | 14 ++++++++++++-- >>>>>> elf/tst-align3.c | 37 +++++++++++++++++++++++++++++++++++++ >>>>>> elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++++++ >>>>>> 3 files changed, 80 insertions(+), 2 deletions(-) >>>>>> create mode 100644 elf/tst-align3.c >>>>>> create mode 100644 elf/tst-alignmod3.c >>>>>> >>>>>> diff --git a/elf/Makefile b/elf/Makefile >>>>>> index ef36008673..b16128ac8b 100644 >>>>>> --- a/elf/Makefile >>>>>> +++ b/elf/Makefile >>>>>> @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ >>>>>> tst-tls4 tst-tls5 \ >>>>>> tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ >>>>>> tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ >>>>>> - tst-align tst-align2 \ >>>>>> + tst-align tst-align2 tst-align3 \ >>>>>> tst-dlmodcount tst-dlopenrpath tst-deep1 \ >>>>>> tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ >>>>>> unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ >>>>>> @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ >>>>>> tests-container += tst-pldd tst-dlopen-tlsmodid-container \ >>>>>> tst-dlopen-self-container tst-preload-pthread-libc >>>>>> test-srcs = tst-pathopt >>>>>> +ifeq (yes,$(have-fpie)) >>>>>> +tests-pie += tst-align3 >>>>>> +endif >>>>>> selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) >>>>>> ifneq ($(selinux-enabled),1) >>>>>> tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog >>>>>> @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ >>>>>> circlemod3 circlemod3a \ >>>>>> reldep8mod1 reldep8mod2 reldep8mod3 \ >>>>>> reldep9mod1 reldep9mod2 reldep9mod3 \ >>>>>> - tst-alignmod tst-alignmod2 \ >>>>>> + tst-alignmod tst-alignmod2 tst-alignmod3 \ >>>>>> $(modules-execstack-$(have-z-execstack)) \ >>>>>> tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ >>>>>> tst-dlmopen1mod tst-auditmod1 \ >>>>>> @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) >>>>>> CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) >>>>>> $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so >>>>>> $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so >>>>>> +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so >>>>>> +ifeq (yes,$(have-fpie)) >>>>>> +CFLAGS-tst-align3.c += $(PIE-ccflag) >>>>>> +endif >>>>>> +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 >>>>>> +LDFLAGS-tst-alignmod3.so += 
-Wl,-z,max-page-size=0x200000 >>>>>> +$(objpfx)tst-alignmod3.so: $(libsupport) >>>>>> >>>>>> $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ >>>>>> $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so >>>>>> diff --git a/elf/tst-align3.c b/elf/tst-align3.c >>>>>> new file mode 100644 >>>>>> index 0000000000..5697c0bbaf >>>>>> --- /dev/null >>>>>> +++ b/elf/tst-align3.c >>>>>> @@ -0,0 +1,37 @@ >>>>>> +/* Check alignment of PT_LOAD segment in a shared library. >>>>>> + Copyright (C) 2021 Free Software Foundation, Inc. >>>>>> + This file is part of the GNU C Library. >>>>>> + >>>>>> + The GNU C Library is free software; you can redistribute it and/or >>>>>> + modify it under the terms of the GNU Lesser General Public >>>>>> + License as published by the Free Software Foundation; either >>>>>> + version 2.1 of the License, or (at your option) any later version. >>>>>> + >>>>>> + The GNU C Library is distributed in the hope that it will be useful, >>>>>> + but WITHOUT ANY WARRANTY; without even the implied warranty of >>>>>> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >>>>>> + Lesser General Public License for more details. >>>>>> + >>>>>> + You should have received a copy of the GNU Lesser General Public >>>>>> + License along with the GNU C Library; if not, see >>>>>> + <https://www.gnu.org/licenses/>. */ >>>>>> + >>>>>> +#include <support/check.h> >>>>>> +#include <tst-stack-align.h> >>>>>> + >>>>>> +#define ALIGN 0x200000 >>>>>> + >>>>>> +int bar __attribute__ ((aligned (ALIGN))) = 1; >>>>>> + >>>>>> +extern int do_load_test (void); >>>>>> + >>>>>> +static int >>>>>> +do_test (void) >>>>>> +{ >>>>>> + printf ("bar: %p\n", &bar); >>>>>> + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); >>>>>> + >>>>>> + return do_load_test (); >>>>>> +} >>>>>> + >>>>>> +#include <support/test-driver.c> >>>>>> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c >>>>>> new file mode 100644 >>>>>> index 0000000000..50ec08462c >>>>>> --- /dev/null >>>>>> +++ b/elf/tst-alignmod3.c >>>>>> @@ -0,0 +1,31 @@ >>>>>> +/* Check alignment of PT_LOAD segment in a shared library. >>>>>> + Copyright (C) 2021 Free Software Foundation, Inc. >>>>>> + This file is part of the GNU C Library. >>>>>> + >>>>>> + The GNU C Library is free software; you can redistribute it and/or >>>>>> + modify it under the terms of the GNU Lesser General Public >>>>>> + License as published by the Free Software Foundation; either >>>>>> + version 2.1 of the License, or (at your option) any later version. >>>>>> + >>>>>> + The GNU C Library is distributed in the hope that it will be useful, >>>>>> + but WITHOUT ANY WARRANTY; without even the implied warranty of >>>>>> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >>>>>> + Lesser General Public License for more details. >>>>>> + >>>>>> + You should have received a copy of the GNU Lesser General Public >>>>>> + License along with the GNU C Library; if not, see >>>>>> + <https://www.gnu.org/licenses/>. */ >>>>>> + >>>>>> +#include <support/check.h> >>>>>> +#include <tst-stack-align.h> >>>>>> + >>>>>> +#define ALIGN 0x200000 >>>>> >>>>> I think it should cover all possible pagesize we current support. Maybe add >>>>> a comment here or on Makefile about it. >>>> >>>> Here is the v2 patch I added >>>> >>>> /* This should cover all possible page sizes we currently support. 
*/ >>>> >>>>>> + >>>>>> +int foo __attribute__ ((aligned (ALIGN))) = 1; >>>>>> + >>>>>> +void >>>>>> +do_load_test (void) >>>>>> +{ >>>>>> + printf ("foo: %p\n", &foo); >>>>>> + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); >>>>>> +} >>>>>> >>>> >>>> >>>> >>>> -- >>>> H.J. >>> >>> I am checking in this. >>> >> >> It fails to build for alpha: >> >> /tmp/cc0teHmf.s: Assembler messages: >> /tmp/cc0teHmf.s:206: Error: Alignment too large: 16. assumed >> make[2]: *** [../o-iterator.mk:9: /home/azanella/Projects/glibc/build/alpha-linux-gnu/elf/tst-align3.o] Error 1 >> make[2]: Leaving directory '/home/azanella/Projects/glibc/glibc-git/elf' >> make[1]: *** [Makefile:483: elf/tests] Error 2 >> make[1]: Leaving directory '/home/azanella/Projects/glibc/glibc-git' >> make: *** [Makefile:9: check] Error 2 > > The maximum alignment for alpha is 0x10000. I will add a configure-time > check. > microblaze also fails with: tst-align3.c:25:1: error: requested alignment ‘2097152’ exceeds object file maximum 32768 25 | int bar __attribute__ ((aligned (ALIGN))) = 1; | ^~~ And nios2 with: /tmp/ccu9QZLs.s: Assembler messages: /tmp/ccu9QZLs.s:285: Error: Alignment too large: 15. assumed ^ permalink raw reply [flat|nested] 42+ messages in thread
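The failures above come from the assembler's per-target limit on object alignment, so the configure-time check H.J. Lu mentions only has to ask whether the toolchain accepts the testcase's 2 MiB alignment at all. A minimal sketch of such a compile probe (illustrative only, not the check that eventually landed; the macro value mirrors the testcase):

/* Hypothetical configure probe: if this fails to assemble, the target
   (e.g. alpha, microblaze, nios2) cannot align a variable to 2 MiB and
   tst-align3 has to be skipped there.  */
#define TEST_ALIGN 0x200000

int probe __attribute__ ((aligned (TEST_ALIGN))) = 1;

int
main (void)
{
  return probe != 1;
}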
* Re: [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored 2021-12-10 12:39 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang 2021-12-10 12:39 ` [PATCH v5 1/2] elf: Properly align PT_LOAD segments Rongwei Wang 2021-12-10 12:39 ` [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang @ 2021-12-10 13:13 ` H.J. Lu 2021-12-10 13:58 ` Rongwei Wang 2 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-10 13:13 UTC (permalink / raw) To: Rongwei Wang; +Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg On Fri, Dec 10, 2021 at 4:39 AM Rongwei Wang <rongwei.wang@linux.alibaba.com> wrote: > > Hi > > This patch mainly to fix a reported bug: > > "p_align on PT_LOAD segment in DSO isn't honored" > https://sourceware.org/bugzilla/show_bug.cgi?id=28676 > > A testcase had been builded by H.J.Lu: > https://sourceware.org/bugzilla/attachment.cgi?id=13838 > > And in Patch 1/1, I also show a simple testcase which > modified from H.J.Lu. > > And a similar bug in ELF binary was also reported: > https://bugzilla.kernel.org/show_bug.cgi?id=215275 I submitted a kernel patch: https://lore.kernel.org/lkml/20211209174052.370537-1-hjl.tools@gmail.com/T/ Hopefully, it will be landed soon. > A related fix patch could been found: > https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/ > Originally, we send this patch for introducing .text > hugepages, was not aware of it's bug. And now, I am > not sure whether kernel maintainer will regards it as > a bug. But it deserved try again. > > Thanks. > > Changelog: > > v4 -> v5 > - Patch "Add a testcase to check alignment of PT_LOAD segment" > add new testcase for PT_LOAD segment > - Patch "elf: Properly align PT_LOAD segments" > fix map_start to use map_start_aligned when second mmap failed > > v3 -> v4 > - Patch "elf: Properly align PT_LOAD segments" > Call unmap when the second mmap fails. > > v2 -> v3 > - Patch "elf: Properly align PT_LOAD segments" > move mapalign into 'struct loadcmd' > fix some coding style > > RFC/v1 -> v2 > > - Patch "elf: align the mapping address of LOAD segments with p_align" > fix coding format and add testcase in commit. > > RFC link: > https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/ > > H.J. Lu (1): > Add a testcase to check alignment of PT_LOAD segment > > Rongwei Wang (1): > elf: Properly align PT_LOAD segments > > elf/Makefile | 14 +++++++++++-- > elf/dl-load.c | 1 + > elf/dl-load.h | 2 +- > elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- > elf/tst-align3.c | 37 ++++++++++++++++++++++++++++++++ > elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++ > 6 files changed, 127 insertions(+), 7 deletions(-) > create mode 100644 elf/tst-align3.c > create mode 100644 elf/tst-alignmod3.c > > -- > 2.27.0 > LGTM. Reviewed-by: H.J. Lu <hjl.tools@gmail.com> Can you check it in? Thanks. -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored 2021-12-10 13:13 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored H.J. Lu @ 2021-12-10 13:58 ` Rongwei Wang 0 siblings, 0 replies; 42+ messages in thread From: Rongwei Wang @ 2021-12-10 13:58 UTC (permalink / raw) To: H.J. Lu; +Cc: GNU C Library, Florian Weimer, xuyu, gavin.dg Hi On 12/10/21 9:13 PM, H.J. Lu wrote: > On Fri, Dec 10, 2021 at 4:39 AM Rongwei Wang > <rongwei.wang@linux.alibaba.com> wrote: >> >> Hi >> >> This patch mainly to fix a reported bug: >> >> "p_align on PT_LOAD segment in DSO isn't honored" >> https://sourceware.org/bugzilla/show_bug.cgi?id=28676 >> >> A testcase had been builded by H.J.Lu: >> https://sourceware.org/bugzilla/attachment.cgi?id=13838 >> >> And in Patch 1/1, I also show a simple testcase which >> modified from H.J.Lu. >> >> And a similar bug in ELF binary was also reported: >> https://bugzilla.kernel.org/show_bug.cgi?id=215275 > > I submitted a kernel patch: > > https://lore.kernel.org/lkml/20211209174052.370537-1-hjl.tools@gmail.com/T/ > > Hopefully, it will be landed soon. Nice! If possible, please CC me on it. I think this patch can help us introduce .text hugepages transparently. > >> A related fix patch could been found: >> https://lore.kernel.org/linux-mm/20211009092658.59665-4-rongwei.wang@linux.alibaba.com/ >> Originally, we send this patch for introducing .text >> hugepages, was not aware of it's bug. And now, I am >> not sure whether kernel maintainer will regards it as >> a bug. But it deserved try again. >> >> Thanks. >> >> Changelog: >> >> v4 -> v5 >> - Patch "Add a testcase to check alignment of PT_LOAD segment" >> add new testcase for PT_LOAD segment >> - Patch "elf: Properly align PT_LOAD segments" >> fix map_start to use map_start_aligned when second mmap failed >> >> v3 -> v4 >> - Patch "elf: Properly align PT_LOAD segments" >> Call unmap when the second mmap fails. >> >> v2 -> v3 >> - Patch "elf: Properly align PT_LOAD segments" >> move mapalign into 'struct loadcmd' >> fix some coding style >> >> RFC/v1 -> v2 >> >> - Patch "elf: align the mapping address of LOAD segments with p_align" >> fix coding format and add testcase in commit. >> >> RFC link: >> https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/ >> >> H.J. Lu (1): >> Add a testcase to check alignment of PT_LOAD segment >> >> Rongwei Wang (1): >> elf: Properly align PT_LOAD segments >> >> elf/Makefile | 14 +++++++++++-- >> elf/dl-load.c | 1 + >> elf/dl-load.h | 2 +- >> elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- >> elf/tst-align3.c | 37 ++++++++++++++++++++++++++++++++ >> elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++ >> 6 files changed, 127 insertions(+), 7 deletions(-) >> create mode 100644 elf/tst-align3.c >> create mode 100644 elf/tst-alignmod3.c >> >> -- >> 2.27.0 >> > > LGTM. > > Reviewed-by: H.J. Lu <hjl.tools@gmail.com> > > Can you check it in? I have one doubt: I have been subscribed to the linux-kernel mailing list for a long time, but I cannot find your patch. Have you received any review mail? Thanks. > > Thanks. > > -- > H.J. > ^ permalink raw reply [flat|nested] 42+ messages in thread
* [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored 2021-12-04 4:58 [PATCH RFC 0/1] make ld.so map .text LOAD ssegments and aligned by p_align Rongwei Wang 2021-12-04 4:58 ` [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang 2021-12-10 12:39 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang @ 2021-12-13 2:51 ` Rongwei Wang 2021-12-13 2:51 ` [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] Rongwei Wang ` (2 more replies) 2 siblings, 3 replies; 42+ messages in thread From: Rongwei Wang @ 2021-12-13 2:51 UTC (permalink / raw) To: libc-alpha, hjl.tools, fweimer, adhemerval.zanella; +Cc: xuyu, gavin.dg Hi, This patch set mainly fixes a reported bug: "p_align on PT_LOAD segment in DSO isn't honored" https://sourceware.org/bugzilla/show_bug.cgi?id=28676 Patch 2/2 is a simple testcase modified from H.J. Lu's reproducer. Thanks. Changelog: v5 -> v6 - Patch "Add a testcase to check alignment of PT_LOAD segment" add some comments - Patch "elf: Properly align PT_LOAD segments" update copyright v4 -> v5 - Patch "Add a testcase to check alignment of PT_LOAD segment" add new testcase for PT_LOAD segment - Patch "elf: Properly align PT_LOAD segments" fix map_start to use map_start_aligned when second mmap failed v3 -> v4 - Patch "elf: Properly align PT_LOAD segments" Call unmap when the second mmap fails. v2 -> v3 - Patch "elf: Properly align PT_LOAD segments" move mapalign into 'struct loadcmd' fix some coding style RFC/v1 -> v2 - Patch "elf: align the mapping address of LOAD segments with p_align" fix coding format and add testcase in commit. RFC link: https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/ H.J. Lu (1): Add a testcase to check alignment of PT_LOAD segment Rongwei Wang (1): elf: Properly align PT_LOAD segments elf/Makefile | 14 +++++++++++-- elf/dl-load.c | 1 + elf/dl-load.h | 2 +- elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- elf/tst-align3.c | 37 ++++++++++++++++++++++++++++++++ elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++ 6 files changed, 127 insertions(+), 7 deletions(-) create mode 100644 elf/tst-align3.c create mode 100644 elf/tst-alignmod3.c -- 2.27.0 ^ permalink raw reply [flat|nested] 42+ messages in thread
* [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 2:51 ` [PATCH v6 " Rongwei Wang @ 2021-12-13 2:51 ` Rongwei Wang 2021-12-13 11:05 ` Szabolcs Nagy 2021-12-13 11:46 ` Andreas Schwab 2021-12-13 2:51 ` [PATCH v6 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang 2021-12-14 2:03 ` [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Fangrui Song 2 siblings, 2 replies; 42+ messages in thread From: Rongwei Wang @ 2021-12-13 2:51 UTC (permalink / raw) To: libc-alpha, hjl.tools, fweimer, adhemerval.zanella; +Cc: xuyu, gavin.dg When PT_LOAD segment alignment > the page size, allocate enough space to ensure that the segment can be properly aligned. And this change helps code segments use huge pages become simple and available. This fixes [BZ #28676]. Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/dl-load.c | 2 ++ elf/dl-load.h | 3 ++- elf/dl-map-segments.h | 50 +++++++++++++++++++++++++++++++++++++++---- 3 files changed, 50 insertions(+), 5 deletions(-) diff --git a/elf/dl-load.c b/elf/dl-load.c index bf8957e73c..721593135e 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -1,5 +1,6 @@ /* Map in a shared object's segments from the file. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -1150,6 +1151,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, c->mapend = ALIGN_UP (ph->p_vaddr + ph->p_filesz, GLRO(dl_pagesize)); c->dataend = ph->p_vaddr + ph->p_filesz; c->allocend = ph->p_vaddr + ph->p_memsz; + c->mapalign = ph->p_align; c->mapoff = ALIGN_DOWN (ph->p_offset, GLRO(dl_pagesize)); /* Determine whether there is a gap between the last segment diff --git a/elf/dl-load.h b/elf/dl-load.h index e329d49a81..e6dabcb336 100644 --- a/elf/dl-load.h +++ b/elf/dl-load.h @@ -1,5 +1,6 @@ /* Map in a shared object's segments from the file. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -74,7 +75,7 @@ ELF_PREFERRED_ADDRESS_DATA; Its details have been expanded out and converted. */ struct loadcmd { - ElfW(Addr) mapstart, mapend, dataend, allocend; + ElfW(Addr) mapstart, mapend, dataend, allocend, mapalign; ElfW(Off) mapoff; int prot; /* PROT_* bits. */ }; diff --git a/elf/dl-map-segments.h b/elf/dl-map-segments.h index f9fb110ee3..70a4c40695 100644 --- a/elf/dl-map-segments.h +++ b/elf/dl-map-segments.h @@ -1,5 +1,6 @@ /* Map in a shared object's segments. Generic version. Copyright (C) 1995-2021 Free Software Foundation, Inc. + Copyright The GNU Toolchain Authors. This file is part of the GNU C Library. The GNU C Library is free software; you can redistribute it and/or @@ -18,6 +19,50 @@ #include <dl-load.h> +/* Map a segment and align it properly. */ + +static __always_inline ElfW(Addr) +_dl_map_segment (const struct loadcmd *c, ElfW(Addr) mappref, + const size_t maplength, int fd) +{ + if (__glibc_likely (c->mapalign <= GLRO(dl_pagesize))) + return (ElfW(Addr)) __mmap ((void *) mappref, maplength, c->prot, + MAP_COPY|MAP_FILE, fd, c->mapoff); + + /* If the segment alignment > the page size, allocate enough space to + ensure that the segment can be properly aligned. */ + ElfW(Addr) maplen = (maplength >= c->mapalign + ? 
(maplength + c->mapalign) + : (2 * c->mapalign)); + ElfW(Addr) map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplen, + PROT_NONE, + MAP_ANONYMOUS|MAP_PRIVATE, + -1, 0); + if (__glibc_unlikely ((void *) map_start == MAP_FAILED)) + return map_start; + + ElfW(Addr) map_start_aligned = ALIGN_UP (map_start, c->mapalign); + map_start_aligned = (ElfW(Addr)) __mmap ((void *) map_start_aligned, + maplength, c->prot, + MAP_COPY|MAP_FILE|MAP_FIXED, + fd, c->mapoff); + if (__glibc_unlikely ((void *) map_start_aligned == MAP_FAILED)) + __munmap ((void *) map_start, maplen); + else + { + /* Unmap the unused regions. */ + ElfW(Addr) delta = map_start_aligned - map_start; + if (delta) + __munmap ((void *) map_start, delta); + ElfW(Addr) map_end = map_start_aligned + maplength; + delta = map_start + maplen - map_end; + if (delta) + __munmap ((void *) map_end, delta); + } + + return map_start_aligned; +} + /* This implementation assumes (as does the corresponding implementation of _dl_unmap_segments, in dl-unmap-segments.h) that shared objects are always laid out with all segments contiguous (or with gaps @@ -53,10 +98,7 @@ _dl_map_segments (struct link_map *l, int fd, - MAP_BASE_ADDR (l)); /* Remember which part of the address space this object uses. */ - l->l_map_start = (ElfW(Addr)) __mmap ((void *) mappref, maplength, - c->prot, - MAP_COPY|MAP_FILE, - fd, c->mapoff); + l->l_map_start = _dl_map_segment (c, mappref, maplength, fd); if (__glibc_unlikely ((void *) l->l_map_start == MAP_FAILED)) return DL_MAP_SEGMENTS_ERROR_MAP_SEGMENT; -- 2.27.0 ^ permalink raw reply [flat|nested] 42+ messages in thread
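To see the mapping strategy of _dl_map_segment in isolation, the reserve-then-trim idea can be expressed with plain mmap/munmap outside of ld.so. The sketch below is only an illustration under simplifying assumptions: it maps anonymous memory instead of the file, ignores munmap failures just as the patch does, and the helper name and sizes are made up.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Reserve LEN bytes aligned to ALIGN (a power of two larger than the
   page size) by over-mapping a PROT_NONE region and unmapping the
   excess at both ends.  */
static void *
reserve_aligned (size_t len, size_t align)
{
  long page = sysconf (_SC_PAGESIZE);
  size_t maplen = len >= align ? len + align : 2 * align;

  char *start = mmap (NULL, maplen, PROT_NONE,
                      MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  if (start == MAP_FAILED)
    return NULL;

  uintptr_t aligned = ((uintptr_t) start + align - 1) & ~((uintptr_t) align - 1);
  /* Round the used end up to a page boundary: munmap insists on
     page-aligned addresses, a detail that comes up later in the thread.  */
  uintptr_t end = (aligned + len + page - 1) & ~((uintptr_t) page - 1);

  if ((char *) aligned > start)
    munmap (start, (char *) aligned - start);                 /* trim the head */
  if ((uintptr_t) start + maplen > end)
    munmap ((void *) end, (uintptr_t) start + maplen - end);  /* trim the tail */

  return (void *) aligned;
}

int
main (void)
{
  void *p = reserve_aligned (123456, 0x200000);
  printf ("aligned reservation at %p\n", p);
  return p == NULL;
}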
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 2:51 ` [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] Rongwei Wang @ 2021-12-13 11:05 ` Szabolcs Nagy 2021-12-13 11:17 ` Florian Weimer 2021-12-13 11:46 ` Andreas Schwab 1 sibling, 1 reply; 42+ messages in thread From: Szabolcs Nagy @ 2021-12-13 11:05 UTC (permalink / raw) To: Rongwei Wang Cc: libc-alpha, hjl.tools, fweimer, adhemerval.zanella, xuyu, gavin.dg The 12/13/2021 10:51, Rongwei Wang via Libc-alpha wrote: > When PT_LOAD segment alignment > the page size, allocate > enough space to ensure that the segment can be properly > aligned. > > And this change helps code segments use huge pages become > simple and available. > > This fixes [BZ #28676]. > > Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> > Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> since this commit nptl/tst-stack4 consistently fails in my 32bit arm test environment with pthread_create failed: 11 pthread_create failed: 11 pthread_create failed: 11 pthread_create failed: 11 pthread_create failed: 11 pthread_create failed: 11 pthread_create failed: 11 tst-stack4: tst-stack4.c:69: dso_process: Assertion `handle[dso]' failed. Didn't expect signal from child: got `Aborted' i suspect it simply runs out of memory. if this change pushes memory usage of the test above 2G then i think either the test or the patch have to be changed so 32bit targets can still run it reliably. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 11:05 ` Szabolcs Nagy @ 2021-12-13 11:17 ` Florian Weimer 2021-12-13 11:35 ` Szabolcs Nagy 0 siblings, 1 reply; 42+ messages in thread From: Florian Weimer @ 2021-12-13 11:17 UTC (permalink / raw) To: Szabolcs Nagy Cc: Rongwei Wang, libc-alpha, hjl.tools, adhemerval.zanella, xuyu, gavin.dg * Szabolcs Nagy: > The 12/13/2021 10:51, Rongwei Wang via Libc-alpha wrote: >> When PT_LOAD segment alignment > the page size, allocate >> enough space to ensure that the segment can be properly >> aligned. >> >> And this change helps code segments use huge pages become >> simple and available. >> >> This fixes [BZ #28676]. >> >> Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> >> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > > > since this commit nptl/tst-stack4 consistently > fails in my 32bit arm test environment with > > pthread_create failed: 11 > pthread_create failed: 11 > pthread_create failed: 11 > pthread_create failed: 11 > pthread_create failed: 11 > pthread_create failed: 11 > pthread_create failed: 11 > tst-stack4: tst-stack4.c:69: dso_process: Assertion `handle[dso]' failed. > Didn't expect signal from child: got `Aborted' > > i suspect it simply runs out of memory. > > if this change pushes memory usage of the test > above 2G then i think either the test or the > patch have to be changed so 32bit targets can > still run it reliably. What are the p_align values of the involved objects? I would not expect any changes on 32-bit Arm because p_align and the run-time page size should match there. Thanks, Florian ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 11:17 ` Florian Weimer @ 2021-12-13 11:35 ` Szabolcs Nagy 2021-12-13 11:59 ` Florian Weimer 0 siblings, 1 reply; 42+ messages in thread From: Szabolcs Nagy @ 2021-12-13 11:35 UTC (permalink / raw) To: Florian Weimer Cc: Rongwei Wang, libc-alpha, hjl.tools, adhemerval.zanella, xuyu, gavin.dg The 12/13/2021 12:17, Florian Weimer wrote: > * Szabolcs Nagy: > > > The 12/13/2021 10:51, Rongwei Wang via Libc-alpha wrote: > >> When PT_LOAD segment alignment > the page size, allocate > >> enough space to ensure that the segment can be properly > >> aligned. > >> > >> And this change helps code segments use huge pages become > >> simple and available. > >> > >> This fixes [BZ #28676]. > >> > >> Signed-off-by: Xu Yu <xuyu@linux.alibaba.com> > >> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> > > > > > > since this commit nptl/tst-stack4 consistently > > fails in my 32bit arm test environment with > > > > pthread_create failed: 11 > > pthread_create failed: 11 > > pthread_create failed: 11 > > pthread_create failed: 11 > > pthread_create failed: 11 > > pthread_create failed: 11 > > pthread_create failed: 11 > > tst-stack4: tst-stack4.c:69: dso_process: Assertion `handle[dso]' failed. > > Didn't expect signal from child: got `Aborted' > > > > i suspect it simply runs out of memory. > > > > if this change pushes memory usage of the test > > above 2G then i think either the test or the > > patch have to be changed so 32bit targets can > > still run it reliably. > > What are the p_align values of the involved objects? I would not expect > any changes on 32-bit Arm because p_align and the run-time page size > should match there. p_align is 64K and i see a lot of close to 64K PROT_NONE mappings left behind after many dlclose which creates a lot of vm fragmentation when dlopen/dlclose is called in a loop. (i think the mapping is at the beginning or end of the lib as some kind of padding and left behind after dlclose, but haven't confirmed this yet) ^ permalink raw reply [flat|nested] 42+ messages in thread
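The pattern Szabolcs describes is easy to reproduce outside of tst-stack4: dlopen and dlclose, in a loop, any DSO whose PT_LOAD p_align exceeds the run-time page size, and watch whether padding mappings accumulate. A rough sketch follows; the library path and iteration count are placeholders, and the leftover PROT_NONE regions are best observed with strace -e trace=mmap,munmap or by reading /proc/<pid>/maps while the program is paused.

#include <dlfcn.h>
#include <stdio.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  /* Placeholder: any shared object linked with a large p_align, e.g.
     with -Wl,-z,max-page-size=0x200000 (link this program with -ldl
     on older glibc).  */
  const char *lib = argc > 1 ? argv[1] : "./tst-alignmod3.so";

  for (int i = 0; i < 1000; i++)
    {
      void *h = dlopen (lib, RTLD_NOW);
      if (h == NULL)
        {
          fprintf (stderr, "iteration %d: dlopen failed: %s\n", i, dlerror ());
          return 1;
        }
      dlclose (h);
    }

  /* Pause so /proc/<pid>/maps can be inspected for leftover padding.  */
  printf ("pid %d: done, press Enter to exit\n", (int) getpid ());
  getchar ();
  return 0;
}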
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 11:35 ` Szabolcs Nagy @ 2021-12-13 11:59 ` Florian Weimer 2021-12-13 13:20 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Florian Weimer @ 2021-12-13 11:59 UTC (permalink / raw) To: Szabolcs Nagy Cc: Rongwei Wang, libc-alpha, hjl.tools, adhemerval.zanella, xuyu, gavin.dg * Szabolcs Nagy: >> What are the p_align values of the involved objects? I would not expect >> any changes on 32-bit Arm because p_align and the run-time page size >> should match there. > > p_align is 64K > > and i see a lot of close to 64K PROT_NONE mappings > left behind after many dlclose which creates a lot of > vm fragmentation when dlopen/dlclose is called in a loop. > > (i think the mapping is at the beginning or end of > the lib as some kind of padding and left behind after > dlclose, but haven't confirmed this yet) Oh. So why there is a real bug here, I think we need to discuss what the change means for 64K p_align binaries on 4K kernels. Do the additional munmap calls matter for startup performance? (I understand this is very much a correctness fix, but startup performance matters as well.) Thanks, Florian ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 11:59 ` Florian Weimer @ 2021-12-13 13:20 ` H.J. Lu 2021-12-13 13:26 ` Florian Weimer 0 siblings, 1 reply; 42+ messages in thread From: H.J. Lu @ 2021-12-13 13:20 UTC (permalink / raw) To: Florian Weimer Cc: Szabolcs Nagy, Rongwei Wang, GNU C Library, Adhemerval Zanella, xuyu, gavin.dg On Mon, Dec 13, 2021 at 3:59 AM Florian Weimer <fweimer@redhat.com> wrote: > > * Szabolcs Nagy: > > >> What are the p_align values of the involved objects? I would not expect > >> any changes on 32-bit Arm because p_align and the run-time page size > >> should match there. > > > > p_align is 64K > > > > and i see a lot of close to 64K PROT_NONE mappings > > left behind after many dlclose which creates a lot of > > vm fragmentation when dlopen/dlclose is called in a loop. > > > > (i think the mapping is at the beginning or end of > > the lib as some kind of padding and left behind after > > dlclose, but haven't confirmed this yet) > > Oh. So why there is a real bug here, I think we need to discuss what > the change means for 64K p_align binaries on 4K kernels. Do the > additional munmap calls matter for startup performance? We should align munmap arguments. > (I understand this is very much a correctness fix, but startup > performance matters as well.) > The kernel loader doesn't call munmap in this case. Should we be concerned about the unused pages? -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 13:20 ` H.J. Lu @ 2021-12-13 13:26 ` Florian Weimer 2021-12-13 13:34 ` H.J. Lu 0 siblings, 1 reply; 42+ messages in thread From: Florian Weimer @ 2021-12-13 13:26 UTC (permalink / raw) To: H.J. Lu Cc: Szabolcs Nagy, Rongwei Wang, GNU C Library, Adhemerval Zanella, xuyu, gavin.dg * H. J. Lu: > On Mon, Dec 13, 2021 at 3:59 AM Florian Weimer <fweimer@redhat.com> wrote: >> >> * Szabolcs Nagy: >> >> >> What are the p_align values of the involved objects? I would not expect >> >> any changes on 32-bit Arm because p_align and the run-time page size >> >> should match there. >> > >> > p_align is 64K >> > >> > and i see a lot of close to 64K PROT_NONE mappings >> > left behind after many dlclose which creates a lot of >> > vm fragmentation when dlopen/dlclose is called in a loop. >> > >> > (i think the mapping is at the beginning or end of >> > the lib as some kind of padding and left behind after >> > dlclose, but haven't confirmed this yet) >> >> Oh. So why there is a real bug here, I think we need to discuss what >> the change means for 64K p_align binaries on 4K kernels. Do the >> additional munmap calls matter for startup performance? > > We should align munmap arguments. Sorry, I meant “while there is a real bug here”, that is: even if we fix that, the additional munmap calls could hurt startup performance. >> (I understand this is very much a correctness fix, but startup >> performance matters as well.) >> > > The kernel loader doesn't call munmap in this case. Should > we be concerned about the unused pages? Which approach leads to fewer mappings that count against the map limit (64K by default I believe)? Thanks, Florian ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 13:26 ` Florian Weimer @ 2021-12-13 13:34 ` H.J. Lu 0 siblings, 0 replies; 42+ messages in thread From: H.J. Lu @ 2021-12-13 13:34 UTC (permalink / raw) To: Florian Weimer Cc: Szabolcs Nagy, Rongwei Wang, GNU C Library, Adhemerval Zanella, xuyu, gavin.dg On Mon, Dec 13, 2021 at 5:26 AM Florian Weimer <fweimer@redhat.com> wrote: > > * H. J. Lu: > > > On Mon, Dec 13, 2021 at 3:59 AM Florian Weimer <fweimer@redhat.com> wrote: > >> > >> * Szabolcs Nagy: > >> > >> >> What are the p_align values of the involved objects? I would not expect > >> >> any changes on 32-bit Arm because p_align and the run-time page size > >> >> should match there. > >> > > >> > p_align is 64K > >> > > >> > and i see a lot of close to 64K PROT_NONE mappings > >> > left behind after many dlclose which creates a lot of > >> > vm fragmentation when dlopen/dlclose is called in a loop. > >> > > >> > (i think the mapping is at the beginning or end of > >> > the lib as some kind of padding and left behind after > >> > dlclose, but haven't confirmed this yet) > >> > >> Oh. So why there is a real bug here, I think we need to discuss what > >> the change means for 64K p_align binaries on 4K kernels. Do the > >> additional munmap calls matter for startup performance? > > > > We should align munmap arguments. > > Sorry, I meant “while there is a real bug here”, that is: even if we fix > that, the additional munmap calls could hurt startup performance. True. > >> (I understand this is very much a correctness fix, but startup > >> performance matters as well.) > >> > > > > The kernel loader doesn't call munmap in this case. Should > > we be concerned about the unused pages? > > Which approach leads to fewer mappings that count against the map limit > (64K by default I believe)? Without munmap, some pages will be wasted. Should we change the linker to add a new DT_XXX entry to request this new behavior? Linker will set it only when setting p_align > page size due to variable alignment or a command-line option. -- H.J. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 2:51 ` [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] Rongwei Wang 2021-12-13 11:05 ` Szabolcs Nagy @ 2021-12-13 11:46 ` Andreas Schwab 2021-12-13 11:52 ` Szabolcs Nagy 1 sibling, 1 reply; 42+ messages in thread From: Andreas Schwab @ 2021-12-13 11:46 UTC (permalink / raw) To: Rongwei Wang via Libc-alpha Cc: hjl.tools, fweimer, adhemerval.zanella, Rongwei Wang, xuyu, gavin.dg On Dez 13 2021, Rongwei Wang via Libc-alpha wrote: > + else > + { > + /* Unmap the unused regions. */ > + ElfW(Addr) delta = map_start_aligned - map_start; > + if (delta) > + __munmap ((void *) map_start, delta); > + ElfW(Addr) map_end = map_start_aligned + maplength; > + delta = map_start + maplen - map_end; > + if (delta) > + __munmap ((void *) map_end, delta); I don't think map_end is guaranteed to be page-aligned. -- Andreas Schwab, schwab@linux-m68k.org GPG Key fingerprint = 7578 EB47 D4E5 4D69 2510 2552 DF73 E780 A9DA AEC1 "And now for something completely different." ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 11:46 ` Andreas Schwab @ 2021-12-13 11:52 ` Szabolcs Nagy 2021-12-13 14:51 ` Rongwei Wang 0 siblings, 1 reply; 42+ messages in thread From: Szabolcs Nagy @ 2021-12-13 11:52 UTC (permalink / raw) To: Andreas Schwab; +Cc: Rongwei Wang via Libc-alpha, fweimer, xuyu, gavin.dg The 12/13/2021 12:46, Andreas Schwab wrote: > On Dez 13 2021, Rongwei Wang via Libc-alpha wrote: > > > + else > > + { > > + /* Unmap the unused regions. */ > > + ElfW(Addr) delta = map_start_aligned - map_start; > > + if (delta) > > + __munmap ((void *) map_start, delta); > > + ElfW(Addr) map_end = map_start_aligned + maplength; > > + delta = map_start + maplen - map_end; > > + if (delta) > > + __munmap ((void *) map_end, delta); > > I don't think map_end is guaranteed to be page-aligned. indeed i see failing munmap syscalls in strace .. 3161105 munmap(0xf7973040, 57344) = -1 EINVAL (Invalid argument) 3161105 munmap(0xf79591d4, 24576) = -1 EINVAL (Invalid argument) 3161107 munmap(0xf6031038, 45056) = -1 EINVAL (Invalid argument) 3161108 munmap(0xf56f1038, 53248) = -1 EINVAL (Invalid argument) ... ^ permalink raw reply [flat|nested] 42+ messages in thread
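The EINVALs above are the kernel rejecting unaligned start addresses: munmap requires its address argument to be a multiple of the page size (the length, by contrast, is effectively rounded up), so every trim whose map_end is not page-aligned silently does nothing and the padding stays mapped. A standalone illustration of that rule:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  long page = sysconf (_SC_PAGESIZE);
  char *p = mmap (NULL, 4 * page, PROT_NONE,
                  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  if (p == MAP_FAILED)
    return 1;

  /* Unaligned address: fails with EINVAL, nothing is unmapped.  */
  if (munmap (p + 0x1d4, page) != 0)
    printf ("munmap with unaligned address: %s\n", strerror (errno));

  /* Page-aligned address with an odd length: the length is rounded up
     and the call succeeds.  */
  if (munmap (p + page, page + 123) == 0)
    printf ("munmap with aligned address and odd length: ok\n");

  return munmap (p, 4 * page) != 0;
}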
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 11:52 ` Szabolcs Nagy @ 2021-12-13 14:51 ` Rongwei Wang 2021-12-13 17:37 ` Szabolcs Nagy 0 siblings, 1 reply; 42+ messages in thread From: Rongwei Wang @ 2021-12-13 14:51 UTC (permalink / raw) To: Szabolcs Nagy, Andreas Schwab Cc: fweimer, xuyu, Rongwei Wang via Libc-alpha, gavin.dg On 12/13/21 7:52 PM, Szabolcs Nagy via Libc-alpha wrote: > The 12/13/2021 12:46, Andreas Schwab wrote: >> On Dez 13 2021, Rongwei Wang via Libc-alpha wrote: >> >>> + else >>> + { >>> + /* Unmap the unused regions. */ >>> + ElfW(Addr) delta = map_start_aligned - map_start; >>> + if (delta) >>> + __munmap ((void *) map_start, delta); >>> + ElfW(Addr) map_end = map_start_aligned + maplength; >>> + delta = map_start + maplen - map_end; >>> + if (delta) >>> + __munmap ((void *) map_end, delta); >> >> I don't think map_end is guaranteed to be page-aligned. > > indeed i see failing munmap syscalls in strace Hi, Szabolcs Thanks for your test! I have no arm32 environment, so I missed this failure. It seems 'map_end' needs to be page-aligned before calling munmap. The following change only updates the first line to fix this bug: + ElfW(Addr) map_end = ALIGN_UP(map_start_aligned + maplength, GLRO(dl_pagesize)); + delta = map_start + maplen - map_end; + if (delta) + __munmap ((void *) map_end, delta); Could you help test this new code again when you get a chance? Thanks. > > .. > 3161105 munmap(0xf7973040, 57344) = -1 EINVAL (Invalid argument) > 3161105 munmap(0xf79591d4, 24576) = -1 EINVAL (Invalid argument) > 3161107 munmap(0xf6031038, 45056) = -1 EINVAL (Invalid argument) > 3161108 munmap(0xf56f1038, 53248) = -1 EINVAL (Invalid argument) > ... ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 14:51 ` Rongwei Wang @ 2021-12-13 17:37 ` Szabolcs Nagy 2021-12-13 17:50 ` Florian Weimer 0 siblings, 1 reply; 42+ messages in thread From: Szabolcs Nagy @ 2021-12-13 17:37 UTC (permalink / raw) To: Rongwei Wang Cc: Andreas Schwab, fweimer, xuyu, Rongwei Wang via Libc-alpha, gavin.dg The 12/13/2021 22:51, Rongwei Wang wrote: > On 12/13/21 7:52 PM, Szabolcs Nagy via Libc-alpha wrote: > > The 12/13/2021 12:46, Andreas Schwab wrote: > > > On Dez 13 2021, Rongwei Wang via Libc-alpha wrote: > > > > > > > + else > > > > + { > > > > + /* Unmap the unused regions. */ > > > > + ElfW(Addr) delta = map_start_aligned - map_start; > > > > + if (delta) > > > > + __munmap ((void *) map_start, delta); > > > > + ElfW(Addr) map_end = map_start_aligned + maplength; > > > > + delta = map_start + maplen - map_end; > > > > + if (delta) > > > > + __munmap ((void *) map_end, delta); > > > > > > I don't think map_end is guaranteed to be page-aligned. > > > > indeed i see failing munmap syscalls in strace > Hi, Szabolcs > > Thanks for your test! I have no arm32 environment, and ignoring this test. > > It seems the 'map_end' need to be page-aligned before calling munmap. > The following code only update the first line to fix this bug: > > + ElfW(Addr) map_end = ALIGN_UP(map_start_aligned + maplength, > GLRO(dl_pagesize)); > + delta = map_start + maplen - map_end; > + if (delta) > + __munmap ((void *) map_end, delta); > > Can you help me test this new code again if available? yes, the ALIGN_UP works. note that the issue is observable on aarch64 too (with 4k pagesize) and likely x86_64 too, it just does not cause enough vm fragmentation there to run out of memory. you can verify it by using strace -e munmap before and after. thanks. ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] 2021-12-13 17:37 ` Szabolcs Nagy @ 2021-12-13 17:50 ` Florian Weimer 0 siblings, 0 replies; 42+ messages in thread From: Florian Weimer @ 2021-12-13 17:50 UTC (permalink / raw) To: Szabolcs Nagy Cc: Rongwei Wang, Andreas Schwab, xuyu, Rongwei Wang via Libc-alpha, gavin.dg * Szabolcs Nagy: > The 12/13/2021 22:51, Rongwei Wang wrote: >> On 12/13/21 7:52 PM, Szabolcs Nagy via Libc-alpha wrote: >> > The 12/13/2021 12:46, Andreas Schwab wrote: >> > > On Dez 13 2021, Rongwei Wang via Libc-alpha wrote: >> > > >> > > > + else >> > > > + { >> > > > + /* Unmap the unused regions. */ >> > > > + ElfW(Addr) delta = map_start_aligned - map_start; >> > > > + if (delta) >> > > > + __munmap ((void *) map_start, delta); >> > > > + ElfW(Addr) map_end = map_start_aligned + maplength; >> > > > + delta = map_start + maplen - map_end; >> > > > + if (delta) >> > > > + __munmap ((void *) map_end, delta); >> > > >> > > I don't think map_end is guaranteed to be page-aligned. >> > >> > indeed i see failing munmap syscalls in strace >> Hi, Szabolcs >> >> Thanks for your test! I have no arm32 environment, and ignoring this test. >> >> It seems the 'map_end' need to be page-aligned before calling munmap. >> The following code only update the first line to fix this bug: >> >> + ElfW(Addr) map_end = ALIGN_UP(map_start_aligned + maplength, >> GLRO(dl_pagesize)); >> + delta = map_start + maplen - map_end; >> + if (delta) >> + __munmap ((void *) map_end, delta); >> >> Can you help me test this new code again if available? > > yes, the ALIGN_UP works. > > note that the issue is observable on aarch64 too (with 4k pagesize) > and likely x86_64 too, it just does not cause enough vm fragmentation > there to run out of memory. > > you can verify it by using strace -e munmap before and after. We should check munmap failure though and rollback everything if necessary. It's possible we can undo the initial PROT_NONE mapping even if future munmap calls fail because unmapping the first mapping does not need to split a mapping. Thanks, Florian ^ permalink raw reply [flat|nested] 42+ messages in thread
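A sketch of the rollback Florian suggests, again in terms of plain mmap/munmap rather than the loader's internal helpers: if trimming either end of the oversized reservation fails, drop the whole reservation (which never needs to split an existing mapping) and fail the load, rather than keeping a mapping with stray PROT_NONE padding. The sizes here are placeholders.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  long page = sysconf (_SC_PAGESIZE);
  size_t length = 16 * page, align = 0x200000;
  size_t maplen = length + align;

  char *start = mmap (NULL, maplen, PROT_NONE,
                      MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  if (start == MAP_FAILED)
    return 1;

  uintptr_t aligned = ((uintptr_t) start + align - 1) & ~((uintptr_t) align - 1);
  uintptr_t end = aligned + length;   /* page-aligned: length is a page multiple */

  if (((char *) aligned != start
       && munmap (start, (char *) aligned - start) != 0)
      || ((char *) end != start + maplen
          && munmap ((void *) end, (uintptr_t) (start + maplen) - end) != 0))
    {
      /* Roll back: release the whole reservation and report failure.  */
      munmap (start, maplen);
      return 1;
    }

  printf ("kept [%p, %p)\n", (void *) aligned, (void *) end);
  return 0;
}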
* [PATCH v6 2/2] Add a testcase to check alignment of PT_LOAD segment 2021-12-13 2:51 ` [PATCH v6 " Rongwei Wang 2021-12-13 2:51 ` [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] Rongwei Wang @ 2021-12-13 2:51 ` Rongwei Wang 2021-12-14 2:03 ` [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Fangrui Song 2 siblings, 0 replies; 42+ messages in thread From: Rongwei Wang @ 2021-12-13 2:51 UTC (permalink / raw) To: libc-alpha, hjl.tools, fweimer, adhemerval.zanella; +Cc: xuyu, gavin.dg From: "H.J. Lu" <hjl.tools@gmail.com> This patch adds a testcase for PT_LOAD segment to check it is properly aligned when the alignment > the page size. Signed-off-by: "H.J. Lu" <hjl.tools@gmail.com> Signed-off-by: Rongwei Wang <rongwei.wang@linux.alibaba.com> --- elf/Makefile | 14 ++++++++++++-- elf/tst-align3.c | 38 ++++++++++++++++++++++++++++++++++++++ elf/tst-alignmod3.c | 32 ++++++++++++++++++++++++++++++++ 3 files changed, 82 insertions(+), 2 deletions(-) create mode 100644 elf/tst-align3.c create mode 100644 elf/tst-alignmod3.c diff --git a/elf/Makefile b/elf/Makefile index ef36008673..b16128ac8b 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -207,7 +207,7 @@ tests += restest1 preloadtest loadfail multiload origtest resolvfail \ tst-tls4 tst-tls5 \ tst-tls10 tst-tls11 tst-tls12 tst-tls13 tst-tls14 tst-tls15 \ tst-tls16 tst-tls17 tst-tls18 tst-tls19 tst-tls-dlinfo \ - tst-align tst-align2 \ + tst-align tst-align2 tst-align3 \ tst-dlmodcount tst-dlopenrpath tst-deep1 \ tst-dlmopen1 tst-dlmopen3 tst-dlmopen4 \ unload3 unload4 unload5 unload6 unload7 unload8 tst-global1 order2 \ @@ -241,6 +241,9 @@ tests-internal += loadtest unload unload2 circleload1 \ tests-container += tst-pldd tst-dlopen-tlsmodid-container \ tst-dlopen-self-container tst-preload-pthread-libc test-srcs = tst-pathopt +ifeq (yes,$(have-fpie)) +tests-pie += tst-align3 +endif selinux-enabled := $(shell cat /selinux/enforce 2> /dev/null) ifneq ($(selinux-enabled),1) tests-execstack-yes = tst-execstack tst-execstack-needed tst-execstack-prog @@ -302,7 +305,7 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ circlemod3 circlemod3a \ reldep8mod1 reldep8mod2 reldep8mod3 \ reldep9mod1 reldep9mod2 reldep9mod3 \ - tst-alignmod tst-alignmod2 \ + tst-alignmod tst-alignmod2 tst-alignmod3 \ $(modules-execstack-$(have-z-execstack)) \ tst-dlopenrpathmod tst-deep1mod1 tst-deep1mod2 tst-deep1mod3 \ tst-dlmopen1mod tst-auditmod1 \ @@ -1088,6 +1091,13 @@ CFLAGS-tst-alignmod.c += $(stack-align-test-flags) CFLAGS-tst-alignmod2.c += $(stack-align-test-flags) $(objpfx)tst-align.out: $(objpfx)tst-alignmod.so $(objpfx)tst-align2: $(objpfx)tst-alignmod2.so +$(objpfx)tst-align3: $(objpfx)tst-alignmod3.so +ifeq (yes,$(have-fpie)) +CFLAGS-tst-align3.c += $(PIE-ccflag) +endif +LDFLAGS-tst-align3 += -Wl,-z,max-page-size=0x200000 +LDFLAGS-tst-alignmod3.so += -Wl,-z,max-page-size=0x200000 +$(objpfx)tst-alignmod3.so: $(libsupport) $(objpfx)unload3.out: $(objpfx)unload3mod1.so $(objpfx)unload3mod2.so \ $(objpfx)unload3mod3.so $(objpfx)unload3mod4.so diff --git a/elf/tst-align3.c b/elf/tst-align3.c new file mode 100644 index 0000000000..fab6294a09 --- /dev/null +++ b/elf/tst-align3.c @@ -0,0 +1,38 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. 
+ + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +/* This should cover all possible page sizes we currently support. */ +#define ALIGN 0x200000 + +int bar __attribute__ ((aligned (ALIGN))) = 1; + +extern int do_load_test (void); + +static int +do_test (void) +{ + printf ("bar: %p\n", &bar); + TEST_VERIFY (is_aligned (&bar, ALIGN) == 0); + + return do_load_test (); +} + +#include <support/test-driver.c> diff --git a/elf/tst-alignmod3.c b/elf/tst-alignmod3.c new file mode 100644 index 0000000000..f2627626f4 --- /dev/null +++ b/elf/tst-alignmod3.c @@ -0,0 +1,32 @@ +/* Check alignment of PT_LOAD segment in a shared library. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include <support/check.h> +#include <tst-stack-align.h> + +/* This should cover all possible page sizes we currently support. */ +#define ALIGN 0x200000 + +int foo __attribute__ ((aligned (ALIGN))) = 1; + +void +do_load_test (void) +{ + printf ("foo: %p\n", &foo); + TEST_VERIFY (is_aligned (&foo, ALIGN) == 0); +} -- 2.27.0 ^ permalink raw reply [flat|nested] 42+ messages in thread
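For context on what the test asserts: is_aligned comes from tst-stack-align.h and, given the == 0 usage, it is effectively a mask of the address's low bits, so the TEST_VERIFY lines check that &bar and &foo really land on a 2 MiB boundary at run time. A rough standalone equivalent for checking a binary outside the glibc test harness follows; the macro and variable names are made up, and the program only exercises the issue if it is linked with -Wl,-z,max-page-size=0x200000 like the testcase.

#include <stdint.h>
#include <stdio.h>

/* Rough stand-in for is_aligned: the misaligned low bits of P, i.e. 0
   when P is ALIGN-aligned.  */
#define MISALIGNMENT(p, align) ((uintptr_t) (p) & ((uintptr_t) (align) - 1))

#define ALIGN 0x200000

static int var __attribute__ ((aligned (ALIGN))) = 1;

int
main (void)
{
  printf ("var: %p, misalignment: %#lx\n", (void *) &var,
          (unsigned long) MISALIGNMENT (&var, ALIGN));
  /* Before the loader fixes discussed in this thread, a PIE or DSO built
     this way could report a nonzero misalignment here.  */
  return MISALIGNMENT (&var, ALIGN) != 0;
}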
* Re: [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored 2021-12-13 2:51 ` [PATCH v6 " Rongwei Wang 2021-12-13 2:51 ` [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] Rongwei Wang 2021-12-13 2:51 ` [PATCH v6 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang @ 2021-12-14 2:03 ` Fangrui Song 2021-12-14 3:56 ` H.J. Lu 2 siblings, 1 reply; 42+ messages in thread From: Fangrui Song @ 2021-12-14 2:03 UTC (permalink / raw) To: Rongwei Wang Cc: libc-alpha, hjl.tools, fweimer, adhemerval.zanella, xuyu, gavin.dg, Chris Kennelly On 2021-12-13, Rongwei Wang via Libc-alpha wrote: >Hi > >This patch mainly to fix a reported bug: > >"p_align on PT_LOAD segment in DSO isn't honored" >https://sourceware.org/bugzilla/show_bug.cgi?id=28676 (From linekr perspective) I am unsure this is a bug. The generic-abi just says: > p_align > > As ``Program Loading'' describes in this chapter of the processor > supplement, loadable process segments must have congruent values for > p_vaddr and p_offset, modulo the page size. This member gives the value > to which the segments are aligned in memory and in the file. Values 0 > and 1 mean no alignment is required. Otherwise, p_align should be a > positive, integral power of 2, and p_vaddr should equal p_offset, modulo > p_align. The requirement is p_offset = p_vaddr (mod p_align). It does not necessarily imply that the system has to make p_vaddr = real_vaddr (mod p_align). Linkers (GNU ld, gold, ld.lld) set p_align(PT_LOAD) to the CONSTANT(MAXPAGESIZE) (set by -z max-page-size=) value. This is just the largest page size the linked object supports. (The current behavior (including many many ld.so implementations) is `p_vaddr = real_vaddr (mod page_size)`). I guess this reasoning may be related to why the linker option is called max-page-size, not just page-size. My linker oriented stance may be strengthened by the existence of CONSTANT(COMMONPAGESIZE), which is used by PT_GNU_RELRO and is allowed to be smaller than max-page-size: if ld.so always overaligns to p_align, there would be no need to have COMMONPAGESIZE/MAXPAGESIZE distinction. --- I understand that letting ld.so use a large p_align value may make transparent hugepage easy, and may have performance boost for some large executables by some corporate users, but have you considered the downside of always using p_align? How can an user opt out the changed behavior? I think there are many tunable knobs and userspace remapping the pages may have some benefits over ld.so doing it automatically. * At the very least, I can think that people may want to treat RX and RW memory mappings differently, or call mlock() in some circumstances. * If I set max-page-size to 1GB, am I disallowed to use 2M hugepagesize? * Can a user express intention like mlock? * What if a user doesn't want to place some cold code in hugepages? OK, I don't know hugepages well. CC Chris Kennelly as an expert in this area. >Patch 1/1 is a simple testcase which modified from H.J.Lu. > >Thanks. > >Changelog: >v5 -> v6 >- Patch "Add a testcase to check alignment of PT_LOAD segment" >add some comments >- Patch "elf: Properly align PT_LOAD segments" >update copyright > >v4 -> v5 >- Patch "Add a testcase to check alignment of PT_LOAD segment" >add new testcase for PT_LOAD segment >- Patch "elf: Properly align PT_LOAD segments" >fix map_start to use map_start_aligned when second mmap failed > >v3 -> v4 >- Patch "elf: Properly align PT_LOAD segments" >Call unmap when the second mmap fails. 
> >v2 -> v3 >- Patch "elf: Properly align PT_LOAD segments" >move mapalign into 'struct loadcmd' >fix some coding style > >RFC/v1 -> v2 > >- Patch "elf: align the mapping address of LOAD segments with p_align" >fix coding format and add testcase in commit. > >RFC link: >https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/ > >H.J. Lu (1): > Add a testcase to check alignment of PT_LOAD segment > >Rongwei Wang (1): > elf: Properly align PT_LOAD segments > > elf/Makefile | 14 +++++++++++-- > elf/dl-load.c | 1 + > elf/dl-load.h | 2 +- > elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++---- > elf/tst-align3.c | 37 ++++++++++++++++++++++++++++++++ > elf/tst-alignmod3.c | 31 +++++++++++++++++++++++++++ > 6 files changed, 127 insertions(+), 7 deletions(-) > create mode 100644 elf/tst-align3.c > create mode 100644 elf/tst-alignmod3.c > >-- >2.27.0 > ^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored 2021-12-14 2:03 ` [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Fangrui Song @ 2021-12-14 3:56 ` H.J. Lu 0 siblings, 0 replies; 42+ messages in thread From: H.J. Lu @ 2021-12-14 3:56 UTC (permalink / raw) To: Fangrui Song Cc: Rongwei Wang, GNU C Library, Florian Weimer, Adhemerval Zanella, xuyu, gavin.dg, Chris Kennelly On Mon, Dec 13, 2021 at 6:03 PM Fangrui Song <maskray@google.com> wrote: > > On 2021-12-13, Rongwei Wang via Libc-alpha wrote: > >Hi > > > >This patch mainly to fix a reported bug: > > > >"p_align on PT_LOAD segment in DSO isn't honored" > >https://sourceware.org/bugzilla/show_bug.cgi?id=28676 > > (From linekr perspective) I am unsure this is a bug. > > The generic-abi just says: > > > p_align > > > > As ``Program Loading'' describes in this chapter of the processor > > supplement, loadable process segments must have congruent values for > > p_vaddr and p_offset, modulo the page size. This member gives the value > > to which the segments are aligned in memory and in the file. Values 0 > > and 1 mean no alignment is required. Otherwise, p_align should be a > > positive, integral power of 2, and p_vaddr should equal p_offset, modulo > > p_align. > > The requirement is p_offset = p_vaddr (mod p_align). > It does not necessarily imply that the system has to make p_vaddr = > real_vaddr (mod p_align). > > Linkers (GNU ld, gold, ld.lld) set p_align(PT_LOAD) to the > CONSTANT(MAXPAGESIZE) (set by -z max-page-size=) value. This is just > the largest page size the linked object supports. > (The current behavior (including many many ld.so implementations) is `p_vaddr = real_vaddr (mod page_size)`). > > I guess this reasoning may be related to why the linker option is called > max-page-size, not just page-size. > My linker oriented stance may be strengthened by the existence of > CONSTANT(COMMONPAGESIZE), which is used by PT_GNU_RELRO and is allowed > to be smaller than max-page-size: if ld.so always overaligns to p_align, > there would be no need to have COMMONPAGESIZE/MAXPAGESIZE distinction. > > --- > > I understand that letting ld.so use a large p_align value may make > transparent hugepage easy, and may have performance boost for some large > executables by some corporate users, but have you considered the > downside of always using p_align? How can an user opt out the changed > behavior? I think there are many tunable knobs and userspace remapping > the pages may have some benefits over ld.so doing it automatically. Kernel has been doing this since: commit ce81bb256a224259ab686742a6284930cbe4f1fa Author: Chris Kennelly <ckennelly@google.com> Date: Thu Oct 15 20:12:32 2020 -0700 fs/binfmt_elf: use PT_LOAD p_align values for suitable start address Here is the linker proposal how to opt it out: https://sourceware.org/bugzilla/show_bug.cgi?id=28689 by setting p_align to common page size by default. > * At the very least, I can think that people may want to treat RX and RW > memory mappings differently, or call mlock() in some circumstances. > * If I set max-page-size to 1GB, am I disallowed to use 2M hugepagesize? > * Can a user express intention like mlock? > * What if a user doesn't want to place some cold code in hugepages? > > OK, I don't know hugepages well. CC Chris Kennelly as an expert in this > area. > > >Patch 1/1 is a simple testcase which modified from H.J.Lu. > > > >Thanks. 
> >
> >Changelog:
> >v5 -> v6
> >- Patch "Add a testcase to check alignment of PT_LOAD segment"
> >  add some comments
> >- Patch "elf: Properly align PT_LOAD segments"
> >  update copyright
> >
> >v4 -> v5
> >- Patch "Add a testcase to check alignment of PT_LOAD segment"
> >  add new testcase for PT_LOAD segment
> >- Patch "elf: Properly align PT_LOAD segments"
> >  fix map_start to use map_start_aligned when second mmap failed
> >
> >v3 -> v4
> >- Patch "elf: Properly align PT_LOAD segments"
> >  Call unmap when the second mmap fails.
> >
> >v2 -> v3
> >- Patch "elf: Properly align PT_LOAD segments"
> >  move mapalign into 'struct loadcmd'
> >  fix some coding style
> >
> >RFC/v1 -> v2
> >- Patch "elf: align the mapping address of LOAD segments with p_align"
> >  fix coding format and add testcase in commit.
> >
> >RFC link:
> >https://patchwork.sourceware.org/project/glibc/patch/20211204045848.71105-2-rongwei.wang@linux.alibaba.com/
> >
> >H.J. Lu (1):
> >  Add a testcase to check alignment of PT_LOAD segment
> >
> >Rongwei Wang (1):
> >  elf: Properly align PT_LOAD segments
> >
> > elf/Makefile          | 14 +++++++++++--
> > elf/dl-load.c         |  1 +
> > elf/dl-load.h         |  2 +-
> > elf/dl-map-segments.h | 49 +++++++++++++++++++++++++++++++++++++++----
> > elf/tst-align3.c      | 37 ++++++++++++++++++++++++++++++++
> > elf/tst-alignmod3.c   | 31 +++++++++++++++++++++++++++
> > 6 files changed, 127 insertions(+), 7 deletions(-)
> > create mode 100644 elf/tst-align3.c
> > create mode 100644 elf/tst-alignmod3.c
> >
> >--
> >2.27.0
> >

--
H.J.

^ permalink raw reply	[flat|nested] 42+ messages in thread
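To make the generic-ABI point in the exchange above concrete: the spec only requires that p_vaddr and p_offset be congruent modulo p_align; it says nothing about where the loader actually places the segment in the address space. The following standalone checker is an editorial sketch, not part of either patch series, that prints the PT_LOAD alignment fields of a 64-bit ELF file and verifies the congruence. It assumes a native-endian ELF64 input and does only minimal error handling.

/* Editorial sketch, not part of the patch series: print the PT_LOAD
   alignment fields of a 64-bit ELF file and check the generic-ABI
   requirement that p_vaddr and p_offset are congruent modulo p_align.  */
#include <elf.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  if (argc != 2)
    {
      fprintf (stderr, "usage: %s <elf-file>\n", argv[0]);
      return 1;
    }

  int fd = open (argv[1], O_RDONLY);
  struct stat st;
  if (fd < 0 || fstat (fd, &st) < 0)
    {
      perror (argv[1]);
      return 1;
    }

  void *image = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (image == MAP_FAILED)
    {
      perror ("mmap");
      return 1;
    }

  Elf64_Ehdr *ehdr = image;
  if (memcmp (ehdr->e_ident, ELFMAG, SELFMAG) != 0
      || ehdr->e_ident[EI_CLASS] != ELFCLASS64)
    {
      fprintf (stderr, "%s: not a 64-bit ELF file\n", argv[1]);
      return 1;
    }

  Elf64_Phdr *phdr = (Elf64_Phdr *) ((char *) image + ehdr->e_phoff);
  for (int i = 0; i < ehdr->e_phnum; i++)
    if (phdr[i].p_type == PT_LOAD)
      {
        Elf64_Xword align = phdr[i].p_align;
        /* Values 0 and 1 mean no alignment constraint.  */
        int ok = (align <= 1
                  || phdr[i].p_vaddr % align == phdr[i].p_offset % align);
        printf ("LOAD[%d]: p_align=%#llx p_vaddr=%#llx p_offset=%#llx %s\n",
                i, (unsigned long long) align,
                (unsigned long long) phdr[i].p_vaddr,
                (unsigned long long) phdr[i].p_offset,
                ok ? "(congruent)" : "(violates the generic ABI)");
      }
  return 0;
}

Running it against a shared object linked with the default -z max-page-size will typically report p_align of 0x200000 on x86-64 even though the file remains congruent only modulo the 4k page size, which is exactly the situation the bug report is about.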
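And to illustrate the kind of mapping trick the patches revolve around (over-reserving address space, then trimming it down to an aligned window), here is a generic userspace sketch. It is an editorial illustration rather than the actual glibc code; the helper name mmap_file_aligned and its error handling are invented for the example, and it assumes align is a power of two no smaller than the page size.

#define _GNU_SOURCE
#include <stdint.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/* Map 'length' bytes of 'fd' at file offset 'offset' so that the
   returned address is a multiple of 'align'.  */
static void *
mmap_file_aligned (int fd, off_t offset, size_t length, size_t align, int prot)
{
  size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);
  size_t len_pg = (length + pagesize - 1) & ~(pagesize - 1);
  size_t reserve = len_pg + align;

  /* Over-reserve anonymous address space so that an aligned start
     address is guaranteed to exist inside the reservation.  */
  char *base = mmap (NULL, reserve, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (base == MAP_FAILED)
    return MAP_FAILED;

  uintptr_t start = ((uintptr_t) base + align - 1) & ~((uintptr_t) align - 1);
  char *aligned = (char *) start;

  /* Give back the unused head and tail of the reservation.  */
  if (aligned > base)
    munmap (base, aligned - base);
  munmap (aligned + len_pg, base + reserve - (aligned + len_pg));

  /* Replace the remaining reservation with the file contents.  */
  if (mmap (aligned, length, prot, MAP_PRIVATE | MAP_FIXED, fd, offset)
      == MAP_FAILED)
    {
      munmap (aligned, len_pg);
      return MAP_FAILED;
    }
  return aligned;
}

A real loader has to integrate this with its existing segment-mapping logic; the point is only that reserving length plus align bytes with PROT_NONE always leaves room for an align-aligned window, after which the excess pages can be unmapped. Once the text is placed on a 2M boundary, its pages become eligible for collapse into transparent hugepages, for example after madvise(MADV_HUGEPAGE), which is the use case motivating the thread.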
end of thread, other threads:[~2021-12-14  3:57 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-04  4:58 [PATCH RFC 0/1] make ld.so map .text LOAD ssegments and aligned by p_align Rongwei Wang
2021-12-04  4:58 ` [PATCH RFC 1/1] elf: align the mapping address of LOAD segments with p_align Rongwei Wang
2021-12-04 18:10 ` Florian Weimer
2021-12-06  2:47 ` Rongwei Wang
2021-12-06 14:48 ` H.J. Lu
2021-12-08  2:14 ` Rongwei Wang
2021-12-08  2:33 ` H.J. Lu
2021-12-08  3:04 ` Rongwei Wang
2021-12-08 23:52 ` H.J. Lu
2021-12-09  1:43 ` Rongwei Wang
2021-12-10 12:39 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Rongwei Wang
2021-12-10 12:39 ` [PATCH v5 1/2] elf: Properly align PT_LOAD segments Rongwei Wang
2021-12-10 15:43 ` H.J. Lu
2021-12-10 15:45 ` Florian Weimer
2021-12-10 18:54 ` H.J. Lu
2021-12-10 18:57 ` H.J. Lu
2021-12-10 12:39 ` [PATCH v5 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang
2021-12-10 13:48 ` Adhemerval Zanella
2021-12-10 15:41 ` H.J. Lu
2021-12-10 18:56 ` H.J. Lu
2021-12-10 20:05 ` Adhemerval Zanella
2021-12-10 20:24 ` H.J. Lu
2021-12-10 21:34 ` Adhemerval Zanella
2021-12-10 13:13 ` [PATCH v5 0/2] fix p_align on PT_LOAD segment in DSO isn't honored H.J. Lu
2021-12-10 13:58 ` Rongwei Wang
2021-12-13  2:51 ` [PATCH v6 " Rongwei Wang
2021-12-13  2:51 ` [PATCH v6 1/2] elf: Properly align PT_LOAD segments [BZ #28676] Rongwei Wang
2021-12-13 11:05 ` Szabolcs Nagy
2021-12-13 11:17 ` Florian Weimer
2021-12-13 11:35 ` Szabolcs Nagy
2021-12-13 11:59 ` Florian Weimer
2021-12-13 13:20 ` H.J. Lu
2021-12-13 13:26 ` Florian Weimer
2021-12-13 13:34 ` H.J. Lu
2021-12-13 11:46 ` Andreas Schwab
2021-12-13 11:52 ` Szabolcs Nagy
2021-12-13 14:51 ` Rongwei Wang
2021-12-13 17:37 ` Szabolcs Nagy
2021-12-13 17:50 ` Florian Weimer
2021-12-13  2:51 ` [PATCH v6 2/2] Add a testcase to check alignment of PT_LOAD segment Rongwei Wang
2021-12-14  2:03 ` [PATCH v6 0/2] fix p_align on PT_LOAD segment in DSO isn't honored Fangrui Song
2021-12-14  3:56 ` H.J. Lu