public inbox for binutils@sourceware.org
* Threat model for GNU Binutils
@ 2023-04-14 13:12 Richard Earnshaw
From: Richard Earnshaw @ 2023-04-14 13:12 UTC (permalink / raw)
  To: Binutils Mailing List, gdb; +Cc: Siddhesh Poyarekar, Nick Clifton

[-- Attachment #1: Type: text/plain, Size: 886 bytes --]

OK, I think it's time to take a step back.

If we are to have a security policy, I think we first need a threat 
model.  Without it, we can't really argue about what we're trying to 
protect against.

So the attached is my initial stab at trying to write down a threat 
model.  Some of this is subjective, but I'm trying to be reasonably 
realistic.  Most of these threats are really quite low in comparison to 
other tools and services that run on your computer.

In practice, you then take the model and the impact/likelihood matrix
and decide what level of action is needed for each combination -
anything from pre-emptive auditing, through fixing bugs as they are
found, down to doing nothing.  But that's the step after we have the
model agreed.
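
Purely as an illustration of the kind of mapping I mean (the categories
and actions below are placeholders, not a proposal), it might end up
looking something like this:

    # Illustrative sketch only: a hypothetical mapping from
    # (likelihood, impact) to a follow-up action.  The categories and
    # actions are placeholders, not an agreed policy.
    ACTIONS = {
        ("low", "low"):       "fix as an ordinary bug if reported",
        ("low", "critical"):  "treat reports as security issues; audit pre-emptively",
        ("moderate", "high"): "pre-emptive auditing/fuzzing of the affected code",
    }

    def action(likelihood, impact):
        # Anything the matrix does not single out gets normal maintenance.
        return ACTIONS.get((likelihood, impact),
                           "no action beyond normal maintenance")

    print(action("moderate", "high"))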

If you can think of threats I've missed (quite likely, I haven't thought 
about this for long enough), then please suggest additions.

R.

[-- Attachment #2: binutils-threats.txt --]
[-- Type: text/plain, Size: 3262 bytes --]

Threat model for GNU Binutils
=============================

The following potential security threats have been identified in GNU
Binutils.  Note that this does not mean that such a vulnerability is
known to exist.

Threats arising from execution of the GNU Binutils programs
-----------------------------------------------------------

1) Privilege escalation.

  Nature:
  A bug in the tools allows the user to gain privileges that they did not
  already have.

  Likelihood: Low - tools do not run with elevated privileges, so this
  would most likely involve a bug in the kernel.

  Impact: Critical

  Mitigation: None

2) Denial of service

  Nature:
  A bug in the tools leads to resources in the system becoming
  unavailable on a temporary or permanent basis.

  Likelihood: Low

  Impact: Low - tools are normally run under local user control and
  not as daemons.

  Mitigation: sandboxing if access to the tools from a third party is
  needed (e.g. a web service).

3) Data corruption leads to uncontrolled program execution.

  Nature:
  A bug such as unconstrained buffer overflow could lead to a ROP or JOP
  style attack if not fully contained.  Once in control an attacker
  might be able to access any file that the user running the program has
  access to.

  Likelihood: Moderate

  Impact: High

  Mitigation: sandboxing can help if an attacker has direct control
  over inputs supplied to the tools or in cases where the inputs are
  particularly untrustworthy, but is not practical during normal
  usage.
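
  As an illustration of the sandboxing referred to in the two entries
  above (a minimal sketch only, not a recommendation: the tool, paths
  and limits are assumptions), a service exposing the tools to
  third-party inputs might bound a single invocation like this:

    # Minimal sketch: run a binutils tool on an untrusted input with CPU,
    # memory and output-size limits plus a wall-clock timeout.  This is
    # not a full sandbox (no filesystem or namespace isolation); it only
    # bounds the damage a DoS-style bug can do.
    import resource
    import subprocess

    def limit_resources():
        resource.setrlimit(resource.RLIMIT_CPU, (10, 10))           # 10s CPU
        resource.setrlimit(resource.RLIMIT_AS, (512 << 20,) * 2)    # 512 MiB
        resource.setrlimit(resource.RLIMIT_FSIZE, (64 << 20,) * 2)  # 64 MiB output

    def run_tool(argv):
        # timeout raises subprocess.TimeoutExpired if the tool hangs
        return subprocess.run(argv, preexec_fn=limit_resources,
                              capture_output=True, timeout=30, check=False)

    result = run_tool(["objdump", "-d", "/tmp/untrusted-input.o"])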

Threats arising from execution of output produced by GNU Binutils programs
--------------------------------------------------------------------------

Note for this category we explicitly exclude threats that exist in the
input files supplied to the tools and only consider threats introduced
by the tools themselves.

1) Incorrect generation of machine instructions leads to unintended
program behavior.

  Nature:
  Many architectures have 'don't care' bits in the machine instructions.
  Generally the architecture will specify the value that such bits have,
  leaving room for future expansion of the instruction set.  If tools do
  not correctly set these bits then a program may execute correctly on
  some machines, but fail on others.

  Likelihood: Low

  Impact: Moderate - this is unlikely to lead to an exploit, but might lead
  to DoS in some cases.

  Mitigation: cross testing generated output against third-party toolchain
  implementations.
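
  A rough sketch of what such cross testing could look like (it assumes
  both a GNU and an LLVM toolchain are installed, and that a byte-for-byte
  comparison of the encoded .text section is meaningful for the test
  input; file names are placeholders):

    # Assemble the same source with GNU as and with clang's integrated
    # assembler, extract the raw .text bytes from each object and compare.
    # A real harness would drive this over a corpus of test inputs.
    import subprocess

    def text_bytes(obj, raw):
        subprocess.run(["objcopy", "-O", "binary", "--only-section=.text",
                        obj, raw], check=True)
        with open(raw, "rb") as f:
            return f.read()

    subprocess.run(["as", "test.s", "-o", "gnu.o"], check=True)
    subprocess.run(["clang", "-c", "test.s", "-o", "llvm.o"], check=True)

    if text_bytes("gnu.o", "gnu.bin") != text_bytes("llvm.o", "llvm.bin"):
        print("encodings differ - inspect both objects by hand")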

2) Code directly generated by the tools contains a vulnerability

  Nature:
  The vast majority of code output from the tools comes from the input
  files supplied, but a small amount of 'glue' code might be needed in
  some cases, for example to enable jumping to another function in
  another part of the address space.  Linkers are also sometimes asked
  to inject mitigations for known CPU errata when this cannot be done
  during the compilation phase.

  Likelihood: Low

  Impact: Mostly low - the amount of code generated is very small and
  unlikely to involve buffers that contain risky data, so the chances of
  this directly leading to a vulnerability are low.

  Mitigation: monitor for processor vendor vulnerabilities and adjust tool
  code generation if needed.
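
  One way to keep an eye on the small amount of linker-generated code is
  to list the stub/veneer symbols the linker inserted so that they can be
  reviewed when new errata are published.  A rough sketch (the '_veneer'
  naming pattern is a GNU ld convention on Arm targets and is an
  assumption here; other targets use different names):

    # List linker-generated veneer/stub symbols in a binary so the injected
    # glue code can be audited.  Pattern and binary name are illustrative.
    import subprocess

    symbols = subprocess.run(["nm", "a.out"], capture_output=True,
                             text=True, check=True).stdout
    for line in symbols.splitlines():
        if "_veneer" in line:
            print(line)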


* Re: Threat model for GNU Binutils
From: Siddhesh Poyarekar @ 2023-04-14 14:08 UTC (permalink / raw)
  To: Richard Earnshaw, Binutils Mailing List, gdb; +Cc: Nick Clifton

On 2023-04-14 09:12, Richard Earnshaw wrote:
> OK, I think it's time to take a step back.
> 
> If we are to have a security policy, I think we first need a threat 
> model.  Without it, we can't really argue about what we're trying to 
> protect against.
> 
> So the attached is my initial stab at trying to write down a threat 
> model.  Some of this is subjective, but I'm trying to be reasonably 
> realistic.  Most of these threats are really quite low in comparison to 
> other tools and services that run on your computer.
> 
> In practice, you then take the model and the impact/likelihood matrix 
> and decide what level of actions are needed for each combination - 
> whether it be from pre-emptive auditing through fixing bugs if found 
> down to do nothing.   But that's the step after we have the model agreed.
> 
> If you can think of threats I've missed (quite likely, I haven't thought 
> about this for long enough), then please suggest additions.

I assume you're proposing that this be added to SECURITY.md or similar? 
There are overlaps with what we intend for the first part of SECURITY.md.

> Threat model for GNU Binutils
> =============================
> 
> The following potential security threats have been identified in GNU
> Binutils.  Note that this does not mean that such a vulnerability is
> known to exist.

A threat model should define the nature of inputs because that makes the 
difference between something being considered a security threat vs being 
a regular bug.

> Threats arising from execution of the GNU Binutils programs
> -----------------------------------------------------------
> 
> 1) Privilege escalation.
> 
>   Nature:
>   A bug in the tools allows the user to gain privileges that they did not
>   already have.
> 
>   Likelihood: Low - tools do not run with elevated privileges, so this
>   would most likely involve a bug in the kernel.

A more general threat is crossing of privilege boundaries, which is not 
only user -> root but user1 -> user2.  So this won't necessarily involve 
kernel bugs.

>   Impact: Critical

Impact for security issues is assessed on a bug-by-bug basis, so stating 
an impact here doesn't really make sense.

> 
>   Mitigation: None

Sandboxing is the answer for everything :)

> 2) Denial of service
> 
>   Nature:
>   A bug in the tools leads to resources in the system becoming
>   unavailable on a temporary or permanent basis

The answer here changes based on whether the input is trusted or not.

> 
>   Likelihood: Low
> 
>   Impact: Low - tools are normally run under local user control and
>   not as daemons.
> 
>   Mitigation: sandboxing if access to the tools from a third party is
>   needed (eg a web service).
> 
> 3) Data corruption leads to uncontrolled program execution.
> 
>   Nature:
>   A bug such as unconstrained buffer overflow could lead to a ROP or JOP
>   style attack if not fully contained.  Once in control an attacker
>   might be able to access any file that the user running the program has
>   access to.

Likewise.

> 
>   Likelihood: Moderate
> 
>   Impact: High
> 
>   Mitigation: sandboxing can help if an attacker has direct control
>   over inputs supplied to the tools or in cases where the inputs are
>   particularly untrustworthy, but is not practical during normal
>   usage.
> 
> Threats arising from execution of output produced by GNU Binutils programs
> --------------------------------------------------------------------------
> 
> Note for this category we explicitly exclude threats that exist in the
> input files supplied to the tools and only consider threats introduced
> by the tools themselves.
> 
> 1) Incorrect generation of machine instructions leads to unintended
> program behavior.
> 
>   Nature:
>   Many architectures have 'don't care' bits in the machine instructions.
>   Generally the architecture will specify the value that such bits have,
>   leaving room for future expansion of the instruction set.  If tools do
>   not correctly set these bits then a program may execute correctly on
>   some machines, but fail on others.
> 
>   Likelihood: Low
> 
>   Impact: Moderate - this is unlikely to lead to an exploit, but might lead
>   to DoS in some cases.

The impact in this case is context dependent; it will vary based on 
other factors, such as whether a PoC is available, how common the 
vulnerable code pattern is, etc.

> 
>   Mitigation: cross testing generated output against third-party toolchain
>   implementations.
> 
> 2) Code directly generated by the tools contains a vulnerability
> 
>   Nature:
>   The vast majority of code output from the tools comes from the input
>   files supplied, but a small amount of 'glue' code might be needed in
>   some cases, for example to enable jumping to another function in
>   another part of the address space.  Linkers are also sometimes asked
>   to inject mitigations for known CPU errata when this cannot be done
>   during the compilation phase.

Since you've split this one out from machine instructions, there's a 
third category too: cases where the binutils tools generate incorrect 
alignment of sections, sizes of sections, etc.  There's also a (rare) 
possibility of an infrequently used instruction having an incorrect 
opcode mapping, resulting in a bug being masked when dumped with objdump 
or in the resulting code having undefined behaviour.
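
As an aside, the section alignment part of that category can at least be 
checked from the outside; a quick sketch using the third-party pyelftools 
package (the binary name is a placeholder):

    # Verify that each section's load address respects its stated
    # alignment.  Uses the third-party pyelftools package.
    from elftools.elf.elffile import ELFFile

    with open("a.out", "rb") as f:
        elf = ELFFile(f)
        for sec in elf.iter_sections():
            align = sec["sh_addralign"]
            if align > 1 and sec["sh_addr"] % align != 0:
                print("misaligned section:", sec.name)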

> 
>   Likelihood: low
> 
>   Impact: mostly low - the amount of code generated is very small and
>   unlikely to involve buffers that contain risky data, so the chances of
>   this directly leading to a vulnerability is low.
> 
>   Mitigation: monitor for processor vendor vulnerabilities and adjust tool
>   code generation if needed.

Sid


* Re: Threat model for GNU Binutils
From: Richard Earnshaw @ 2023-04-14 14:41 UTC (permalink / raw)
  To: Siddhesh Poyarekar, Binutils Mailing List, gdb; +Cc: Nick Clifton



On 14/04/2023 15:08, Siddhesh Poyarekar wrote:
> On 2023-04-14 09:12, Richard Earnshaw wrote:
>> OK, I think it's time to take a step back.
>>
>> If we are to have a security policy, I think we first need a threat 
>> model.  Without it, we can't really argue about what we're trying to 
>> protect against.
>>
>> So the attached is my initial stab at trying to write down a threat 
>> model.  Some of this is subjective, but I'm trying to be reasonably 
>> realistic.  Most of these threats are really quite low in comparison 
>> to other tools and services that run on your computer.
>>
>> In practice, you then take the model and the impact/likelihood matrix 
>> and decide what level of actions are needed for each combination - 
>> whether it be from pre-emptive auditing through fixing bugs if found 
>> down to do nothing.   But that's the step after we have the model agreed.
>>
>> If you can think of threats I've missed (quite likely, I haven't 
>> thought about this for long enough), then please suggest additions.
> 
> I assume you're proposing that this be added to SECURITY.md or similar? 
> There are overlaps with what we intend for the first part of SECURITY.md.

I'm suggesting it live alongside it.  It's the basis upon which 
SECURITY.md is derived.  Think of this as the analysis and SECURITY.md 
as the policy for dealing with the threats.

> 
>> Threat model for GNU Binutils
>> =============================
>>
>> The following potential security threats have been identified in GNU
>> Binutils.  Note that this does not mean that such a vulnerability is
>> known to exist.
> 
> A threat model should define the nature of inputs because that makes the 
> difference between something being considered a security threat vs being 
> a regular bug.
> 
>> Threats arising from execution of the GNU Binutils programs
>> -----------------------------------------------------------
>>
>> 1) Privilege escalation.
>>
>>   Nature:
>>   A bug in the tools allows the user to gain privileges that they did not
>>   already have.
>>
>>   Likelihood: Low - tools do not run with elevated privileges, so this
>>   would most likely involve a bug in the kernel.
> 
> A more general threat is crossing of privilege boundaries, which is not 
> only user -> root but user1 -> user2.  So this won't necessarily involve 
> kernel bugs.
> 
>>   Impact: Critical
> 
> Impact for security issues is done on a bug by bug basis, so stating 
> impact doesn't really make sense

On the contrary, the point is to estimate the risks and the scale of the 
potential damage if such a bug were to exist; this can then be used to 
determine how much pre-emptive work is needed to guard against it. 
Saying that we won't consider it until it happens is not helpful.

> 
>>
>>   Mitigation: None
> 
> Sandboxing is the answer for everything :)

This threat is about the ability to escape a sandbox (e.g. Linux user 
accounts are a sandbox of sorts).  So putting something in a sandbox if 
you can escape it is pointless.  Furthermore, if a bug of this nature 
exists in the tools then it doesn't need a remote actor: a malicious 
user on the machine could potentially exploit it to access things they 
are not supposed to under the standard system security model.

> 
>> 2) Denial of service
>>
>>   Nature:
>>   A bug in the tools leads to resources in the system becoming
>>   unavailable on a temporary or permanent basis
> 
> The answer here changes based on whether the input is trusted or not.

Not necessarily.  If the bug could bring down the machine then it's down 
to whether the user is trusted or not.  Admittedly, there are probably 
plenty of ways to do this without needing binutils, but that's beyond 
the scope of this discussion.

> 
>>
>>   Likelihood: Low
>>
>>   Impact: Low - tools are normally run under local user control and
>>   not as daemons.
>>
>>   Mitigation: sandboxing if access to the tools from a third party is
>>   needed (eg a web service).
>>
>> 3) Data corruption leads to uncontrolled program execution.
>>
>>   Nature:
>>   A bug such as unconstrained buffer overflow could lead to a ROP or JOP
>>   style attack if not fully contained.  Once in control an attacker
>>   might be able to access any file that the user running the program has
>>   access to.
> 
> Likewise.
> 
>>
>>   Likelihood: Moderate
>>
>>   Impact: High
>>
>>   Mitigation: sandboxing can help if an attacker has direct control
>>   over inputs supplied to the tools or in cases where the inputs are
>>   particularly untrustworthy, but is not practical during normal
>>   usage.
>>
>> Threats arising from execution of output produced by GNU Binutils 
>> programs
>> --------------------------------------------------------------------------
>>
>> Note for this category we explicitly exclude threats that exist in the
>> input files supplied to the tools and only consider threats introduced
>> by the tools themselves.
>>
>> 1) Incorrect generation of machine instructions leads to unintended
>> program behavior.
>>
>>   Nature:
>>   Many architectures have 'don't care' bits in the machine instructions.
>>   Generally the architecture will specify the value that such bits have,
>>   leaving room for future expansion of the instruction set.  If tools do
>>   not correctly set these bits then a program may execute correctly on
>>   some machines, but fail on others.
>>
>>   Likelihood: Low
>>
>>   Impact: Moderate - this is unlikely to lead to an exploit, but might 
>> lead
>>   to DoS in some cases.
> 
> The impact in this case is context dependent, so the impact will vary 
> based on other factors, such as whether a PoC is available, how common 
> the vulnerable code pattern would be, etc.

Generally speaking, if the input is not trusted, it's just a matter of 
work to get from a known buffer overrun to a PoC that shows an exploit. 
The chances of a normal buffer overrun that is not triggered with 
malicious intent directly leading to a security issue are incredibly low.

> 
>>
>>   Mitigation: cross testing generated output against third-party 
>> toolchain
>>   implementations.
>>
>> 2) Code directly generated by the tools contains a vulnerability
>>
>>   Nature:
>>   The vast majority of code output from the tools comes from the input
>>   files supplied, but a small amount of 'glue' code might be needed in
>>   some cases, for example to enable jumping to another function in
>>   another part of the address space.  Linkers are also sometimes asked
>>   to inject mitigations for known CPU errata when this cannot be done
>>   during the compilation phase.
> 
> Since you've split this one out from machine instructions, there's a 
> third category too; where binutils tools generate incorrect code for 
> alignment of sections, sizes of sections, etc.  There's also a (rare) 
> possibility of an infrequently used instruction having incorrect opcode 
> mapping, resulting in a bug being masked when dumped with objdump or 
> resulting code having undefined behaviour.
> 

Well, I did say that I might have missed some additional threats; this 
is a WIP :)

If you think additional cases need to be added, then go ahead.

>>
>>   Likelihood: low
>>
>>   Impact: mostly low - the amount of code generated is very small and
>>   unlikely to involve buffers that contain risky data, so the chances of
>>   this directly leading to a vulnerability is low.
>>
>>   Mitigation: monitor for processor vendor vulnerabilities and adjust 
>> tool
>>   code generation if needed.
> 
> Sid

R.


* Re: Threat model for GNU Binutils
From: Richard Earnshaw @ 2023-04-14 15:07 UTC (permalink / raw)
  To: Siddhesh Poyarekar, Binutils Mailing List, gdb; +Cc: Nick Clifton



On 14/04/2023 15:08, Siddhesh Poyarekar wrote:
> There's also a (rare) possibility of an infrequently used instruction 
> having incorrect opcode mapping, resulting in a bug being masked when 
> dumped with objdump or resulting code having undefined behaviour.

The best way to deal with this risk is to run test binaries generated 
by the tools through an independently developed toolchain, something I 
mentioned in the mitigation section.  The chances of a common-mode 
failure leading to the same bug in both sets of tools are very low.
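
To make that concrete, one possible shape for such a check (a sketch 
only: it assumes an x86-64 binary, uses the third-party capstone 
disassembler and pyelftools as the independent implementation, and only 
compares mnemonics; a real harness would compare operands too and cover 
more targets):

    # Cross-check objdump's disassembly of .text against the independently
    # developed capstone disassembler.  Binary name is a placeholder.
    import subprocess
    from capstone import Cs, CS_ARCH_X86, CS_MODE_64
    from elftools.elf.elffile import ELFFile

    with open("a.out", "rb") as f:
        text = ELFFile(f).get_section_by_name(".text")
        code, addr = text.data(), text["sh_addr"]

    reference = {insn.address: insn.mnemonic
                 for insn in Cs(CS_ARCH_X86, CS_MODE_64).disasm(code, addr)}

    # Use Intel syntax so mnemonics are comparable with capstone's output.
    dump = subprocess.run(["objdump", "-d", "-M", "intel", "-j", ".text",
                           "a.out"], capture_output=True, text=True,
                          check=True).stdout
    for line in dump.splitlines():
        parts = line.split("\t")
        if len(parts) >= 3 and parts[0].strip().endswith(":"):
            address = int(parts[0].strip().rstrip(":"), 16)
            mnemonic = parts[2].split()[0]
            if reference.get(address) not in (None, mnemonic):
                print(hex(address), "objdump:", mnemonic,
                      "capstone:", reference[address])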

R.


* Re: Threat model for GNU Binutils
From: Siddhesh Poyarekar @ 2023-04-17 16:17 UTC (permalink / raw)
  To: Richard Earnshaw, Binutils Mailing List, gdb; +Cc: Nick Clifton

On 2023-04-14 10:41, Richard Earnshaw wrote:
>>>   Impact: Critical
>>
>> Impact for security issues is done on a bug by bug basis, so stating 
>> impact doesn't really make sense
> 
> On the contrary, the point is to estimate the risks and the scale of the 
> potential damage if such a bug were to exist; this can then be used to 
> determine how much pre-emptive work is needed to guard against it. 
> Saying that we won't consider it until it happens is not helpful.

Then this needs text describing the context under which this impact is 
rated, clearly dissociating it from CVE ratings, because given the 
current state of CVE assignment, bots will assign whatever rating we put 
here regardless of the actual nature of the flaw.

>>>   Mitigation: None
>>
>> Sandboxing is the answer for everything :)
> 
> This threat is about the ability to escape a sandbox (eg linux user 
> accounts are a sandbox of sorts).  So putting something in a sandbox if 
> you can escape it is pointless.  Furthermore, if a bug of this nature 
> exists in the tools then it doesn't need a remote actor, potentially a 
> malicious user on the machine can exploit it to access things they are 
> not supposed to in the standard system security model.

Those are two different CWEs: one is CWE-693 (Protection Mechanism 
Failure) and the other is CWE-269 (Improper Privilege Management).  
There are others, like CWE-648 or CWE-271, that may be relevant if you 
want to break this threat down.

>>
>>> 2) Denial of service
>>>
>>>   Nature:
>>>   A bug in the tools leads to resources in the system becoming
>>>   unavailable on a temporary or permanent basis
>>
>> The answer here changes based on whether the input is trusted or not.
> 
> Not necessarily.  If the bug could bring down the machine then it's down 
> to whether the user is trusted or not.  Admittedly, there are probably 
> plenty of ways to do this without needing binutils, but that's beyond 
> the scope of this discussion.

It is within scope, because binutils ships libbfd, libopcodes, etc., 
which other applications may link against.  Depending on their usage 
context, the nature of the input will make a significant difference to 
how a DoS should be treated.

>>>   Likelihood: Low
>>>
>>>   Impact: Low - tools are normally run under local user control and
>>>   not as daemons.
>>>
>>>   Mitigation: sandboxing if access to the tools from a third party is
>>>   needed (eg a web service).
>>>
>>> 3) Data corruption leads to uncontrolled program execution.
>>>
>>>   Nature:
>>>   A bug such as unconstrained buffer overflow could lead to a ROP or JOP
>>>   style attack if not fully contained.  Once in control an attacker
>>>   might be able to access any file that the user running the program has
>>>   access to.
>>
>> Likewise.
>>
>>>
>>>   Likelihood: Moderate
>>>
>>>   Impact: High
>>>
>>>   Mitigation: sandboxing can help if an attacker has direct control
>>>   over inputs supplied to the tools or in cases where the inputs are
>>>   particularly untrustworthy, but is not practical during normal
>>>   usage.
>>>
>>> Threats arising from execution of output produced by GNU Binutils 
>>> programs
>>> --------------------------------------------------------------------------
>>>
>>> Note for this category we explicitly exclude threats that exist in the
>>> input files supplied to the tools and only consider threats introduced
>>> by the tools themselves.
>>>
>>> 1) Incorrect generation of machine instructions leads to unintended
>>> program behavior.
>>>
>>>   Nature:
>>>   Many architectures have 'don't care' bits in the machine instructions.
>>>   Generally the architecture will specify the value that such bits have,
>>>   leaving room for future expansion of the instruction set.  If tools do
>>>   not correctly set these bits then a program may execute correctly on
>>>   some machines, but fail on others.
>>>
>>>   Likelihood: Low
>>>
>>>   Impact: Moderate - this is unlikely to lead to an exploit, but 
>>> might lead
>>>   to DoS in some cases.
>>
>> The impact in this case is context dependent, so the impact will vary 
>> based on other factors, such as whether a PoC is available, how common 
>> the vulnerable code pattern would be, etc.
> 
> Generally speaking it's just a matter of work to get from a known buffer 
> overrun to a PoC that shows an exploit, if the input is not trusted. The 
> chances of a normal buffer overrun that is not done with malicious 
> intent leading to a security issue directly is incredibly low.

I was referring specifically to what is currently the accepted industry 
standard for rating CVEs, used to help vendors determine how urgently 
they need to fix a bug, i.e. within days or within weeks.  See the 
CVSSv3 calculator[1] as an example of how one would rate CVEs.  Please 
note, though, that security teams of different projects/vendors tend to 
put their own subjective sauce on top of these ratings to make them more 
specific.  This is why a vendor rating[2] may differ from the rating of, 
e.g., NVD.
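
As a concrete (purely hypothetical) example, a crash in objdump on a 
crafted input file might be scored with a vector along the lines of 
AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H, which the v3.1 base formula works 
out to 5.5 (Medium):

    # Worked CVSS v3.1 base score for a hypothetical objdump crash on a
    # crafted file: AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H (scope unchanged).
    # Metric weights are taken from the CVSS v3.1 specification.
    import math

    def roundup(x):                    # CVSS "round up to one decimal place"
        return math.ceil(x * 10) / 10

    av, ac, pr, ui = 0.55, 0.77, 0.85, 0.62   # Local / Low / None / Required
    c, i, a = 0.0, 0.0, 0.56                  # None / None / High

    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
    print(base)                               # 5.5 -> "Medium"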

>>
>>>
>>>   Mitigation: cross testing generated output against third-party 
>>> toolchain
>>>   implementations.
>>>
>>> 2) Code directly generated by the tools contains a vulnerability
>>>
>>>   Nature:
>>>   The vast majority of code output from the tools comes from the input
>>>   files supplied, but a small amount of 'glue' code might be needed in
>>>   some cases, for example to enable jumping to another function in
>>>   another part of the address space.  Linkers are also sometimes asked
>>>   to inject mitigations for known CPU errata when this cannot be done
>>>   during the compilation phase.
>>
>> Since you've split this one out from machine instructions, there's a 
>> third category too; where binutils tools generate incorrect code for 
>> alignment of sections, sizes of sections, etc.  There's also a (rare) 
>> possibility of an infrequently used instruction having incorrect 
>> opcode mapping, resulting in a bug being masked when dumped with 
>> objdump or resulting code having undefined behaviour.
>>
> 
> Well I did say that I might have missed some additional threats, this is 
> a WIP :)
> 
> If you think additional cases need to be added, then go ahead.

The text doesn't take into consideration the fact that binutils ships 
libraries as well, and that they could be used in a context-independent 
manner, meaning that their threat model will depend on how we define 
supported use of those libraries.

This means defining the contexts under which these libraries are 
supported for use, unless we want to support any and all kinds of use.  
The glibc security exceptions[3] are a good example of how this is 
typically done.

Thanks,
Sid

[1] https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator
[2] https://access.redhat.com/security/updates/classification
[3] https://sourceware.org/glibc/wiki/Security%20Exceptions


* Re: Threat model for GNU Binutils
From: Siddhesh Poyarekar @ 2023-04-17 16:22 UTC (permalink / raw)
  To: Richard Earnshaw, Binutils Mailing List, gdb; +Cc: Nick Clifton

On 2023-04-17 12:17, Siddhesh Poyarekar wrote:
>   The glibc security exceptions[3] are a good example of how this is 
> typically done.

I should clarify that, as far as I am aware, the glibc security 
exceptions are not the result of a threat modeling exercise; I quoted 
them as an example of how one could define contexts under which library 
interfaces may be supported for use.

Sid

