From mboxrd@z Thu Jan 1 00:00:00 1970
From: Florian Weimer
To: Adhemerval Zanella via Libc-alpha
Subject: Re: [PATCH v7 1/4] support: Add support_stack_alloc
References: <20210706145839.1658623-1-adhemerval.zanella@linaro.org>
	<20210706145839.1658623-2-adhemerval.zanella@linaro.org>
Date: Wed, 07 Jul 2021 12:17:29 +0200
In-Reply-To: <20210706145839.1658623-2-adhemerval.zanella@linaro.org>
	(Adhemerval Zanella via Libc-alpha's message of
	"Tue, 6 Jul 2021 11:58:36 -0300")
Message-ID: <87k0m2a0na.fsf@oldenburg.str.redhat.com>
List-Id: Libc-alpha mailing list
* Adhemerval Zanella via Libc-alpha:

> The code to allocate a stack from xsigstack is refactored so it can
> be more generic.  The new support_stack_alloc() also sets PROT_EXEC
> if DEFAULT_STACK_PERMS has PF_X.  This is required on some
> architectures (hppa, for instance), and trying to access the rtld
> global from the testsuite would require more intrusive refactoring
> in the ldsodefs.h header.

DEFAULT_STACK_PERMS is misnamed; it's really HISTORIC_STACK_PERMS.
All architectures override it to RW permissions in the toolchain
(maybe with the exception of Hurd, which uses trampolines for nested
functions).

I have a cstack_allocate version that handles this.  It can only be
done from within glibc proper because we do not export the stack
execution status directly.  But I think it's out of scope for glibc
2.34 by now.

> +  /* The guard bands need to be large enough to intercept offset
> +     accesses from a stack address that might otherwise hit another
> +     mapping.  Make them at least twice as big as the stack itself, to
> +     defend against an offset by the entire size of a large
> +     stack-allocated array.  The minimum is 1MiB, which is arbitrarily
> +     chosen to be larger than any "typical" wild pointer offset.
> +     Again, no matter what the number is, round it up to a whole
> +     number of pages.  */
> +  size_t guardsize = roundup (MAX (2 * stacksize, 1024 * 1024), pagesize);
> +  size_t alloc_size = guardsize + stacksize + guardsize;
> +  /* Use MAP_NORESERVE so that RAM will not be wasted on the guard
> +     bands; touch all the pages of the actual stack before returning,
> +     so we know they are allocated.  */
> +  void *alloc_base = xmmap (0,
> +                            alloc_size,
> +                            PROT_NONE,
> +                            MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE|MAP_STACK,
> +                            -1);
> +  /* PF_X can be overridden if PT_GNU_STACK is present.  */
> +  int prot = PROT_READ | PROT_WRITE
> +    | (DEFAULT_STACK_PERMS & PF_X ? PROT_EXEC : 0);
> +  xmprotect (alloc_base + guardsize, stacksize, prot);
> +  memset (alloc_base + guardsize, 0xA5, stacksize);
> +  return (struct support_stack) { alloc_base + guardsize, stacksize, guardsize };

This doesn't handle different stack growth directions.

Thanks,
Florian