From: Richard Earnshaw
Date: Fri, 14 Apr 2023 15:41:38 +0100
Subject: Re: Threat model for GNU Binutils
To: Siddhesh Poyarekar, Binutils Mailing List, gdb@sourceware.org
Cc: Nick Clifton
Message-ID: <5947697c-274f-58a7-af02-00618691021d@foss.arm.com>
In-Reply-To: <78f3e6a6-dec2-3aa2-d1b6-935d842add1e@gotplt.org>

On 14/04/2023 15:08, Siddhesh Poyarekar wrote:
> On 2023-04-14 09:12, Richard Earnshaw wrote:
>> OK, I think it's time to take a step back.
>>
>> If we are to have a security policy, I think we first need a threat
>> model.  Without one, we can't really argue about what we're trying
>> to protect against.
>>
>> So the attached is my initial stab at writing down a threat model.
>> Some of this is subjective, but I'm trying to be reasonably
>> realistic.  Most of these threats are really quite low in comparison
>> to those of other tools and services that run on your computer.
>>
>> In practice, you then take the model and the impact/likelihood
>> matrix and decide what level of action is needed for each
>> combination - anything from pre-emptive auditing, through fixing
>> bugs as they are found, down to doing nothing.  But that's the step
>> after we have agreed the model.
>>
>> If you can think of threats I've missed (quite likely - I haven't
>> thought about this for long enough), then please suggest additions.
>
> I assume you're proposing that this be added to SECURITY.md or
> similar?  There are overlaps with what we intend for the first part
> of SECURITY.md.

I'm suggesting it live alongside it.  It's the basis from which
SECURITY.md is derived.  Think of this as the analysis and SECURITY.md
as the policy for dealing with the threats.

>
>> Threat model for GNU Binutils
>> =============================
>>
>> The following potential security threats have been identified in GNU
>> Binutils.  Note that this does not mean that such a vulnerability is
>> known to exist.
>
> A threat model should define the nature of inputs, because that makes
> the difference between something being considered a security threat
> and being a regular bug.
>
>> Threats arising from execution of the GNU Binutils programs
>> -----------------------------------------------------------
>>
>> 1) Privilege escalation.
>>
>>   Nature:
>>   A bug in the tools allows the user to gain privileges that they
>>   did not already have.
>>
>>   Likelihood: Low - the tools do not run with elevated privileges,
>>   so this would most likely involve a bug in the kernel.
>
> A more general threat is the crossing of privilege boundaries, which
> is not only user -> root but also user1 -> user2.  So this won't
> necessarily involve kernel bugs.
>
>>   Impact: Critical
>
> Impact for security issues is assessed on a bug-by-bug basis, so
> stating the impact here doesn't really make sense.

On the contrary, the point is to estimate the risk and the scale of
the potential damage if such a bug were to exist; this can then be
used to determine how much pre-emptive work is needed to guard against
it.  Saying that we won't consider it until it happens is not helpful.

>
>>   Mitigation: None
>
> Sandboxing is the answer for everything :)

This threat is about the ability to escape a sandbox (e.g. Linux user
accounts are a sandbox of sorts), so putting something in a sandbox is
pointless if you can escape it.  Furthermore, if a bug of this nature
exists in the tools, it doesn't need a remote actor: a malicious local
user could potentially exploit it to access things they are not
supposed to under the standard system security model.

>
>> 2) Denial of service
>>
>>   Nature:
>>   A bug in the tools leads to resources in the system becoming
>>   unavailable on a temporary or permanent basis.
>
> The answer here changes based on whether the input is trusted or not.

Not necessarily.  If the bug could bring down the machine, then it
comes down to whether the user is trusted or not.  Admittedly, there
are probably plenty of ways to do this without needing binutils, but
that's beyond the scope of this discussion.

>
>>   Likelihood: Low
>>
>>   Impact: Low - tools are normally run under local user control and
>>   not as daemons.
>>
>>   Mitigation: sandboxing if access to the tools from a third party
>>   is needed (e.g. a web service).
>>
>> 3) Data corruption leads to uncontrolled program execution.
>>
>>   Nature:
>>   A bug such as an unconstrained buffer overflow could lead to a
>>   ROP- or JOP-style attack if it is not fully contained.  Once in
>>   control, an attacker might be able to access any file that the
>>   user running the program has access to.
>
> Likewise.
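To make this concrete, the kind of bug I have in mind here is the
classic unchecked length field when parsing a file.  A hypothetical
sketch (the names and structure are made up for illustration, not
taken from the binutils sources):

    #include <stdio.h>

    struct section_header {
        char name[16];
        unsigned int size;   /* read straight from the input file */
    };

    void load_section(FILE *f, const struct section_header *hdr)
    {
        char buf[4096];

        /* BUG: hdr->size is attacker-controlled; if it exceeds
           sizeof buf this overruns the stack, which is exactly the
           starting point for a ROP/JOP-style attack.  The fix is
           mundane: reject oversized values before reading, e.g.
           if (hdr->size > sizeof buf) return;  */
        if (fread(buf, 1, hdr->size, f) != hdr->size)
            return;
    }

An attacker who controls the input file controls how much of the
stack gets overwritten, and from there it is 'just' engineering work
to turn that into control of the program counter.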
>
>>   Likelihood: Moderate
>>
>>   Impact: High
>>
>>   Mitigation: sandboxing can help if an attacker has direct control
>>   over the inputs supplied to the tools, or in cases where the
>>   inputs are particularly untrustworthy, but it is not practical
>>   during normal usage.
>>
>> Threats arising from execution of output produced by GNU Binutils
>> programs
>> ------------------------------------------------------------------
>>
>> Note that for this category we explicitly exclude threats that exist
>> in the input files supplied to the tools, and only consider threats
>> introduced by the tools themselves.
>>
>> 1) Incorrect generation of machine instructions leads to unintended
>>    program behavior.
>>
>>   Nature:
>>   Many architectures have 'don't care' bits in their machine
>>   instructions.  Generally the architecture will specify the value
>>   that such bits must have, leaving room for future expansion of the
>>   instruction set.  If the tools do not set these bits correctly,
>>   then a program may execute correctly on some machines but fail on
>>   others.
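As an aside, this one is mechanically simple to get right: the encoder
just has to force the architected value.  A hypothetical fragment (the
encoding and the mask are invented, not from a real architecture):

    #include <stdint.h>

    /* Suppose bits 20-23 are reserved and the architecture says they
       must be zero.  */
    #define RESERVED_MASK 0x00f00000u

    uint32_t encode_insn(uint32_t opcode, uint32_t operands)
    {
        uint32_t insn = opcode | operands;

        /* If we fail to force the reserved field to its architected
           value, the instruction may execute today but be
           reinterpreted (or fault) on a future core that assigns a
           meaning to those bits.  */
        insn &= ~RESERVED_MASK;
        return insn;
    }

The failure mode is insidious precisely because everything works on
the machines you happen to test on.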
>>
>>   Likelihood: Low
>>
>>   Impact: Moderate - this is unlikely to lead to an exploit, but it
>>   might lead to a DoS in some cases.
>
> The impact in this case is context dependent, so it will vary based
> on other factors, such as whether a PoC is available, how common the
> vulnerable code pattern is, etc.

Generally speaking, if the input is not trusted, it's just a matter of
work to get from a known buffer overrun to a PoC that demonstrates an
exploit.  The chances of a buffer overrun that is not triggered with
malicious intent directly leading to a security issue are incredibly
low.

>
>>   Mitigation: cross-testing generated output against third-party
>>   toolchain implementations.
>>
>> 2) Code directly generated by the tools contains a vulnerability
>>
>>   Nature:
>>   The vast majority of the code output by the tools comes from the
>>   input files supplied, but a small amount of 'glue' code might be
>>   needed in some cases, for example to enable jumping to another
>>   function in another part of the address space.  Linkers are also
>>   sometimes asked to inject mitigations for known CPU errata when
>>   this cannot be done during the compilation phase.
>
> Since you've split this one out from machine instructions, there's a
> third category too: where the binutils tools generate incorrect code
> for the alignment of sections, the sizes of sections, etc.  There's
> also a (rare) possibility of an infrequently used instruction having
> an incorrect opcode mapping, resulting in a bug being masked when the
> code is dumped with objdump, or in the resulting code having
> undefined behaviour.

Well, I did say that I might have missed some additional threats; this
is a WIP :)  If you think additional cases need to be added, then go
ahead.

>>
>>   Likelihood: Low
>>
>>   Impact: Mostly low - the amount of code generated is very small
>>   and unlikely to involve buffers that contain risky data, so the
>>   chances of this directly leading to a vulnerability are low.
>>
>>   Mitigation: monitor for processor vendor vulnerabilities and
>>   adjust the tools' code generation if needed.
>
> Sid

R.
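P.S. For the 'glue' code case in threat 2, the canonical example is a
branch-range veneer.  Roughly, taking AArch64 as the example (B/BL
carries a signed 26-bit word offset, so a reach of +/-128MiB; the code
itself is an illustrative sketch, not the linker's actual logic):

    #include <stdint.h>
    #include <stdbool.h>

    /* 26-bit signed field, scaled by 4: +/-2^27 bytes of reach.  */
    #define BRANCH_REACH ((int64_t)1 << 27)

    /* Decide whether the linker must emit a veneer (a small stub of
       code it generates itself) so the branch can reach its
       destination.  */
    bool needs_veneer(uint64_t from, uint64_t to)
    {
        int64_t offset = (int64_t)(to - from);
        return offset < -BRANCH_REACH || offset >= BRANCH_REACH;
    }

It's this handful of linker-generated instructions (and errata
workarounds like it) that the threat is about, which is why the
exposure is so small.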