Subject: Re: [PATCH v4] ld: Rewrite lang_size_relro_segment_1
From: "H.J. Lu"
To: Alan Modra
Cc: Nick Clifton, Binutils
Date: Sat, 29 Jan 2022 08:45:38 -0800

On Fri, Jan 28, 2022 at 5:01 PM Alan Modra wrote:
>
> On Wed, Jan 26, 2022 at 06:10:28PM -0800, H.J. Lu wrote:
> > On Wed, Jan 26, 2022 at 4:48 PM Alan Modra wrote:
> > >
> > > On Wed, Jan 26, 2022 at 10:55:35AM +0000, Nick Clifton via Binutils wrote:
> > > > Hi H.J.
> > > >
> > > > > Here is the v4 patch to align the PT_GNU_RELRO segment first and subtract
> > > > > the maximum page size if there is still a 1-page gap.  This fixes:
> > > > >
> > > > > https://sourceware.org/bugzilla/show_bug.cgi?id=28819
> > > >
> > > > Looks good to me.  Please apply.  Branch and mainline.
> > >
> > > How was this patch tested?  If it has not been tested on targets other
> > > than x86 then it is not appropriate for the branch.  Instead, I
> >
> > What kinds of tests are you looking for?  Do you have a testcase to show
> > there is still an issue?
>
> Can you answer my question?

My initial patch passes all tests in binutils, including cross
binutils, but those tests didn't catch all the cases.  For the second
patch, I built glibc for the Linux targets.  Only the ia64 glibc has
no PT_GNU_RELRO segment, because ia64 doesn't support GNU_RELRO.
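For reference, the rule the v4 patch applies boils down to the
following (paraphrased as a standalone snippet with made-up addresses,
not the actual ld code):

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  /* Sketch of the v4 rule: align the relro base down to maxpagesize
     first, and only back off a whole page if a 1-page gap would
     still remain before the segment.  */
  uint64_t maxpagesize = 0x10000;
  uint64_t prev_sec_end = 0x20100;      /* end of the preceding section */
  uint64_t desired_relro_base = 0x32000;
  uint64_t aligned = desired_relro_base & ~(maxpagesize - 1);

  if (prev_sec_end + maxpagesize < aligned)
    desired_relro_base -= maxpagesize;  /* still a 1-page gap */
  else
    desired_relro_base = aligned;       /* aligning is enough */

  printf ("relro base: 0x%llx\n",
          (unsigned long long) desired_relro_base);
  return 0;
}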
> > > believe commit 2f83249c13d ought to be reverted on the branch.  A
> > > wasted page in memory layout is not an important problem.  We have
> > > more serious relro issues on targets other than x86.  See pr28824.
> > >
> > > I'm speaking up because I raised concerns over the design of the
> > > previous patch, not just the implementation, and had those concerns
> > > overridden by Nick.  Or possibly Nick didn't see my email before the
> > > previous patch was approved.  Regardless of how it happened, I fear
> > > we might have even worse relro support in 2.38.
> > >
> > > FWIW, I also think that any fix I might develop for pr28824 will
> > > likely not be release branch quality until it has been tested for
> > > some time on mainline.
>
> This is the patch I'm playing with.  It reverts your changes to
> lang_size_relro_segment_1 and fixes PR28824.
>
> The end of the PT_GNU_RELRO segment must be aligned to maxpagesize in
> order for the last page in the segment to be write protected after
> relocation.  Unfortunately we currently only align to commonpagesize,
> which works for x86 but not for any other architecture that actually
> uses maxpagesize memory pages.  I've kept the commonpage alignment
> for x86; other targets get maxpagesize.

x86 should also get maxpagesize.  commonpagesize should only be used
to align the beginning of the PT_GNU_RELRO segment, not the end of it.
Since the PT_LOAD segment is aligned to maxpagesize, it should be OK
to use commonpagesize to align the start of the PT_GNU_RELRO segment,
which is inside the PT_LOAD segment.  But there is no reason to have
a 1-page gap before the PT_GNU_RELRO segment.
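To put numbers on it (made-up values, just a sketch rather than output
from any real link): with 64K runtime pages, an end rounded only to
the 4K commonpagesize stops in the middle of the last 64K page, so
that page cannot be made read-only after relocation.

#include <stdint.h>
#include <stdio.h>

/* Round X up to a multiple of the power-of-two PAGE.  */
static uint64_t
round_up (uint64_t x, uint64_t page)
{
  return (x + page - 1) & -page;
}

int
main (void)
{
  uint64_t relro_end = 0x10345;
  uint64_t commonpagesize = 0x1000;   /* 4K  */
  uint64_t maxpagesize = 0x10000;     /* 64K */

  printf ("end rounded to commonpagesize: 0x%llx\n",
          (unsigned long long) round_up (relro_end, commonpagesize));
  printf ("end rounded to maxpagesize:    0x%llx\n",
          (unsigned long long) round_up (relro_end, maxpagesize));
  return 0;
}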
> There are of course testsuite changes that are needed, multiple x86
> and one s390x test.  The only fail of any significance is pr18176,
> but there is a fundamental tension between pr18176 and pr28743.  You
> either create holes or you waste disk space; both conditions can't
> be met.  Worse IMO, the pr18176 testcase not only has a hole but
> also a gap in memory layout after the last section in the relro
> segment.  That's not ideal for powerpc, where sections after .got
> (the last relro section) are in some cases accessed via the got/toc
> pointer reg.  On x86 you'd see a gap between .got and .got.plt when
> SEPARATE_GOTPLT.
>
> diff --git a/bfd/elfxx-x86.c b/bfd/elfxx-x86.c
> index da8a488db36..c9b3e7c4d9f 100644
> --- a/bfd/elfxx-x86.c
> +++ b/bfd/elfxx-x86.c
> @@ -4205,4 +4205,5 @@ _bfd_elf_linker_x86_set_options (struct bfd_link_info * info,
>        = elf_x86_hash_table (info, bed->target_id);
>    if (htab != NULL)
>      htab->params = params;
> +  info->relro_use_commonpagesize = true;
>  }
> diff --git a/include/bfdlink.h b/include/bfdlink.h
> index 69fc9d33ff4..8d5561b0b28 100644
> --- a/include/bfdlink.h
> +++ b/include/bfdlink.h
> @@ -413,6 +413,10 @@ struct bfd_link_info
>    /* TRUE if PT_GNU_RELRO segment should be created.  */
>    unsigned int relro: 1;
>
> +  /* TRUE if the relro segment should be aligned to commonpagesize
> +     rather than maxpagesize.  */
> +  unsigned int relro_use_commonpagesize : 1;
> +
>    /* TRUE if DT_RELR should be enabled for compact relative
>       relocations.  */
>    unsigned int enable_dt_relr: 1;
> diff --git a/ld/ldexp.c b/ld/ldexp.c
> index 5f904aaf8ae..65618912043 100644
> --- a/ld/ldexp.c
> +++ b/ld/ldexp.c
> @@ -469,7 +469,8 @@ fold_segment_align (seg_align_type *seg, etree_value_type *lhs)
>      }
>    else
>      {
> -      expld.result.value += expld.dot & (maxpage - 1);
> +      if (!link_info.relro)
> +        expld.result.value += expld.dot & (maxpage - 1);
>        if (seg->phase == exp_seg_done)
>          {
>            /* OK.  */
> @@ -480,6 +481,10 @@ fold_segment_align (seg_align_type *seg, etree_value_type *lhs)
>            seg->base = expld.result.value;
>            seg->pagesize = commonpage;
>            seg->maxpagesize = maxpage;
> +          if (link_info.relro_use_commonpagesize)
> +            seg->relropagesize = commonpage;
> +          else
> +            seg->relropagesize = maxpage;
>            seg->relro_end = 0;
>          }
>        else
> @@ -508,10 +513,10 @@ fold_segment_relro_end (seg_align_type *seg, etree_value_type *lhs)
>        seg->relro_end = lhs->value + expld.result.value;
>
>        if (seg->phase == exp_seg_relro_adjust
> -          && (seg->relro_end & (seg->pagesize - 1)))
> +          && (seg->relro_end & (seg->relropagesize - 1)))
>          {
> -          seg->relro_end += seg->pagesize - 1;
> -          seg->relro_end &= ~(seg->pagesize - 1);
> +          seg->relro_end += seg->relropagesize - 1;
> +          seg->relro_end &= ~(seg->relropagesize - 1);
>            expld.result.value = seg->relro_end - expld.result.value;
>          }
>        else
> diff --git a/ld/ldexp.h b/ld/ldexp.h
> index ac4fa7e82b0..f1701b66330 100644
> --- a/ld/ldexp.h
> +++ b/ld/ldexp.h
> @@ -136,7 +136,8 @@ enum relro_enum {
>  typedef struct {
>    enum phase_enum phase;
>
> -  bfd_vma base, relro_offset, relro_end, end, pagesize, maxpagesize;
> +  bfd_vma base, relro_offset, relro_end, end;
> +  bfd_vma pagesize, maxpagesize, relropagesize;
>
>    enum relro_enum relro;
>
> diff --git a/ld/ldlang.c b/ld/ldlang.c
> index 5dd3df12a0f..1a09b988fbe 100644
> --- a/ld/ldlang.c
> +++ b/ld/ldlang.c
> @@ -6370,101 +6370,40 @@ lang_size_segment (seg_align_type *seg)
>  static bfd_vma
>  lang_size_relro_segment_1 (seg_align_type *seg)
>  {
> -  bfd_vma relro_end, desired_relro_base;
> -  asection *sec, *relro_sec = NULL;
> -  unsigned int max_alignment_power = 0;
> -  bool seen_reloc_section = false;
> -  bool desired_relro_base_reduced = false;
> +  bfd_vma relro_end, desired_end;
> +  asection *sec;
>
>    /* Compute the expected PT_GNU_RELRO/PT_LOAD segment end.  */
> -  relro_end = ((seg->relro_end + seg->pagesize - 1)
> -               & ~(seg->pagesize - 1));
> +  relro_end = (seg->relro_end + seg->relropagesize - 1) & -seg->relropagesize;
>
>    /* Adjust by the offset arg of XXX_SEGMENT_RELRO_END.  */
> -  desired_relro_base = relro_end - seg->relro_offset;
> +  desired_end = relro_end - seg->relro_offset;
>
> -  /* For sections in the relro segment.  */
> +  /* For sections in the relro segment..  */
>    for (sec = link_info.output_bfd->section_last; sec; sec = sec->prev)
> -    if ((sec->flags & SEC_ALLOC) != 0)
> +    if ((sec->flags & SEC_ALLOC) != 0
> +        && sec->vma >= seg->base
> +        && sec->vma < seg->relro_end - seg->relro_offset)
>        {
> -        /* Record the maximum alignment for all sections starting from
> -           the relro segment.  */
> -        if (sec->alignment_power > max_alignment_power)
> -          max_alignment_power = sec->alignment_power;
> -
> -        if (sec->vma >= seg->base
> -            && sec->vma < seg->relro_end - seg->relro_offset)
> -          {
> -            /* Where do we want to put the relro section so that the
> -               relro segment ends on the page bounary?  */
> -            bfd_vma start, end, bump;
> -
> -            end = start = sec->vma;
> -            if (!IS_TBSS (sec))
> -              end += TO_ADDR (sec->size);
> -            bump = desired_relro_base - end;
> -            /* We'd like to increase START by BUMP, but we must heed
> -               alignment so the increase might be less than optimum.  */
> -            start += bump;
> -            start &= ~(((bfd_vma) 1 << sec->alignment_power) - 1);
> -            /* This is now the desired end for the previous section.  */
> -            desired_relro_base = start;
> -            relro_sec = sec;
> -            seen_reloc_section = true;
> -          }
> -        else if (seen_reloc_section)
> -          {
> -            /* Stop searching if we see a non-relro section after seeing
> -               relro sections.  */
> -            break;
> -          }
> +        /* Where do we want to put this section so that it ends as
> +           desired?  */
> +        bfd_vma start, end, bump;
> +
> +        end = start = sec->vma;
> +        if (!IS_TBSS (sec))
> +          end += TO_ADDR (sec->size);
> +        bump = desired_end - end;
> +        /* We'd like to increase START by BUMP, but we must heed
> +           alignment so the increase might be less than optimum.  */
> +        start += bump;
> +        start &= ~(((bfd_vma) 1 << sec->alignment_power) - 1);
> +        /* This is now the desired end for the previous section.  */
> +        desired_end = start;
>        }
>
> -  if (relro_sec != NULL
> -      && seg->maxpagesize >= (1U << max_alignment_power))
> -    {
> -      asection *prev_sec;
> -      bfd_vma prev_sec_end_plus_1_page;
> -
> -      /* Find the first preceding load section.  */
> -      for (prev_sec = relro_sec->prev;
> -           prev_sec != NULL;
> -           prev_sec = prev_sec->prev)
> -        if ((prev_sec->flags & SEC_ALLOC) != 0)
> -          break;
> -
> -      prev_sec_end_plus_1_page = (prev_sec->vma + prev_sec->size
> -                                  + seg->maxpagesize);
> -      if (prev_sec_end_plus_1_page < desired_relro_base)
> -        {
> -          bfd_vma aligned_relro_base;
> -
> -          desired_relro_base_reduced = true;
> -
> -          /* Don't add the 1-page gap before the relro segment.  Align
> -             the relro segment first.  */
> -          aligned_relro_base = (desired_relro_base
> -                                & ~(seg->maxpagesize - 1));
> -          if (prev_sec_end_plus_1_page < aligned_relro_base)
> -            {
> -              /* Subtract the maximum page size if therer is still a
> -                 1-page gap.  */
> -              desired_relro_base -= seg->maxpagesize;
> -              relro_end -= seg->maxpagesize;
> -            }
> -          else
> -            {
> -              /* Align the relro segment.  */
> -              desired_relro_base = aligned_relro_base;
> -              relro_end &= ~(seg->maxpagesize - 1);
> -            }
> -        }
> -    }
> -
>    seg->phase = exp_seg_relro_adjust;
> -  ASSERT (desired_relro_base_reduced
> -          || desired_relro_base >= seg->base);
> -  seg->base = desired_relro_base;
> +  ASSERT (desired_end >= seg->base);
> +  seg->base = desired_end;
>    return relro_end;
>  }
>
>
> --
> Alan Modra
> Australia Development Lab, IBM

--
H.J.
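P.S.  A note on the rounding in the ldexp.c and ldlang.c hunks above:
for a power-of-two page size P, -P and ~(P - 1) are the same mask in
unsigned arithmetic, so the "(x + P - 1) & -P" form used in the new
relro_end computation is the usual round-up-to-P, just written more
compactly.  A standalone check (not part of the patch):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint64_t page = 0x10000;      /* e.g. a 64K maxpagesize */
  uint64_t end = 0x123456;

  /* -page and ~(page - 1) clear the same low bits.  */
  assert (-page == ~(page - 1));
  printf ("0x%llx masked down: 0x%llx\n",
          (unsigned long long) end,
          (unsigned long long) (end & -page));
  return 0;
}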