public inbox for gcc-patches@gcc.gnu.org
* [RFC] GCC Security policy
@ 2023-08-07 17:29 David Edelsohn
  2023-08-08  8:16 ` Richard Biener
                   ` (3 more replies)
  0 siblings, 4 replies; 72+ messages in thread
From: David Edelsohn @ 2023-08-07 17:29 UTC (permalink / raw)
  To: GCC Patches; +Cc: Siddhesh Poyarekar, Carlos O'Donell


FOSS Best Practices recommends that projects have an official Security
policy stated in a SECURITY.md or SECURITY.txt file at the root of the
repository.  GLIBC and Binutils have added such documents.

Appended is a prototype for a Security policy file for GCC based on the
Binutils document because GCC seems to have more affinity with Binutils as
a tool. Do the runtime libraries distributed with GCC, especially libgcc,
require additional security policies?

[ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
Security policy?

[ ] Does GCC, or some components of GCC, require additional care because of
runtime libraries like libgcc and libstdc++, and because of gcov and
profile-directed feedback?

Thoughts?

Thanks, David

GCC Security Process
====================

What is a GCC security bug?
===========================

    A security bug is one that threatens the security of a system or
    network, or might compromise the security of data stored on it.
    In the context of GCC there are two ways in which such
    bugs might occur.  In the first, the programs themselves might be
    tricked into a direct compromise of security.  In the second, the
    tools might introduce a vulnerability in the generated output that
    was not already present in the files used as input.

    All other bugs will be treated as non-security issues.  This does
    not mean that they will be ignored, just that they will not be
    given the priority that is given to security bugs.

    This stance applies to the creation tools in GCC (e.g., gcc, g++,
    gfortran, gccgo, gccrs, gnat, cpp, gcov) and the libraries that
    they use.

Notes:
======

    None of the programs in GCC need elevated privileges to operate and
    it is recommended that users do not use them from accounts where such
    privileges are automatically available.

Reporting private security bugs
===============================

   *All bugs reported in the GCC Bugzilla are public.*

   In order to report a private security bug that is not immediately
   public, please contact one of the downstream distributions with
   security teams.  The following teams have volunteered to handle
   such bugs:

      Debian:  security@debian.org
      Red Hat: secalert@redhat.com
      SUSE:    security@suse.de

   Please report the bug to just one of these teams.  It will be shared
   with other teams as necessary.

   The team contacted will take care of details such as vulnerability
   rating and CVE assignment (http://cve.mitre.org/about/).  It is likely
   that the team will ask to file a public bug because the issue is
   sufficiently minor and does not warrant an embargo.  An embargo is not
   a requirement for being credited with the discovery of a security
   vulnerability.

Reporting public security bugs
==============================

   It is expected that critical security bugs will be rare, and that
   most security bugs can be reported in the GCC Bugzilla, thus making
   them public immediately.  The system can be found here:

      https://gcc.gnu.org/bugzilla/

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-07 17:29 [RFC] GCC Security policy David Edelsohn
@ 2023-08-08  8:16 ` Richard Biener
  2023-08-08 12:33   ` Siddhesh Poyarekar
  2023-08-14 13:26 ` Siddhesh Poyarekar
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 72+ messages in thread
From: Richard Biener @ 2023-08-08  8:16 UTC (permalink / raw)
  To: David Edelsohn; +Cc: GCC Patches, Siddhesh Poyarekar, Carlos O'Donell

On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> FOSS Best Practices recommends that projects have an official Security
> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
> repository.  GLIBC and Binutils have added such documents.
>
> Appended is a prototype for a Security policy file for GCC based on the
> Binutils document because GCC seems to have more affinity with Binutils as
> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
> require additional security policies?
>
> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
> Security policy?
>
> [ ] Does GCC, or some components of GCC, require additional care because of
> runtime libraries like libgcc and libstdc++, and because of gcov and
> profile-directed feedback?

I do think that the runtime libraries should at least be explicitly mentioned
because they fall into the "generated output" category and bugs in the
runtime are usually more severe as affecting a wider class of inputs.

> Thoughts?
>
> Thanks, David
>
> GCC Security Process
> ====================
>
> What is a GCC security bug?
> ===========================
>
>     A security bug is one that threatens the security of a system or
>     network, or might compromise the security of data stored on it.
>     In the context of GCC there are two ways in which such
>     bugs might occur.  In the first, the programs themselves might be
>     tricked into a direct compromise of security.  In the second, the
>     tools might introduce a vulnerability in the generated output that
>     was not already present in the files used as input.
>
>     Other than that, all other bugs will be treated as non-security
>     issues.  This does not mean that they will be ignored, just that
>     they will not be given the priority that is given to security bugs.
>
>     This stance applies to the creation tools in the GCC (e.g.,
>     gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
>     libraries that they use.
>
> Notes:
> ======
>
>     None of the programs in GCC need elevated privileges to operate and
>     it is recommended that users do not use them from accounts where such
>     privileges are automatically available.

I'll note that we could ourselves mitigate some of that by handling privileged
invocation of the driver specially, dropping privs on exec of the sibling tools
and possibly using temporary files or pipes to do the parts of the I/O that
need to be privileged.

> Reporting private security bugs
> ========================
>
>    *All bugs reported in the GCC Bugzilla are public.*
>
>    In order to report a private security bug that is not immediately
>    public, please contact one of the downstream distributions with
>    security teams.  The following teams have volunteered to handle
>    such bugs:
>
>       Debian:  security@debian.org
>       Red Hat: secalert@redhat.com
>       SUSE:    security@suse.de
>
>    Please report the bug to just one of these teams.  It will be shared
>    with other teams as necessary.
>
>    The team contacted will take care of details such as vulnerability
>    rating and CVE assignment (http://cve.mitre.org/about/).  It is likely
>    that the team will ask to file a public bug because the issue is
>    sufficiently minor and does not warrant an embargo.  An embargo is not
>    a requirement for being credited with the discovery of a security
>    vulnerability.
>
> Reporting public security bugs
> ==============================

Put this first, name it "Reporting security bugs"

>    It is expected that critical security bugs will be rare, and that most
>    security bugs can be reported in GCC, thus making
>    them public immediately.  The system can be found here:
>
>       https://gcc.gnu.org/bugzilla/


* Re: [RFC] GCC Security policy
  2023-08-08  8:16 ` Richard Biener
@ 2023-08-08 12:33   ` Siddhesh Poyarekar
  2023-08-08 12:52     ` Richard Biener
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-08 12:33 UTC (permalink / raw)
  To: Richard Biener, David Edelsohn; +Cc: GCC Patches, Carlos O'Donell

On 2023-08-08 04:16, Richard Biener wrote:
> On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
> <gcc-patches@gcc.gnu.org> wrote:
>>
>> FOSS Best Practices recommends that projects have an official Security
>> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
>> repository.  GLIBC and Binutils have added such documents.
>>
>> Appended is a prototype for a Security policy file for GCC based on the
>> Binutils document because GCC seems to have more affinity with Binutils as
>> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
>> require additional security policies?
>>
>> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
>> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
>> Security policy?
>>
>> [ ] Does GCC, or some components of GCC, require additional care because of
>> runtime libraries like libgcc and libstdc++, and because of gcov and
>> profile-directed feedback?
> 
> I do think that the runtime libraries should at least be explicitly mentioned
> because they fall into the "generated output" category and bugs in the
> runtime are usually more severe as affecting a wider class of inputs.

Ack, I'd expect libstdc++ and libgcc to be aligned with glibc's
policies.  libiberty and others, on the other hand, would probably be
more suitably aligned with binutils' libbfd, where we assume trusted input.

>> Thoughts?
>>
>> Thanks, David
>>
>> GCC Security Process
>> ====================
>>
>> What is a GCC security bug?
>> ===========================
>>
>>      A security bug is one that threatens the security of a system or
>>      network, or might compromise the security of data stored on it.
>>      In the context of GCC there are two ways in which such
>>      bugs might occur.  In the first, the programs themselves might be
>>      tricked into a direct compromise of security.  In the second, the
>>      tools might introduce a vulnerability in the generated output that
>>      was not already present in the files used as input.
>>
>>      Other than that, all other bugs will be treated as non-security
>>      issues.  This does not mean that they will be ignored, just that
>>      they will not be given the priority that is given to security bugs.
>>
>>      This stance applies to the creation tools in the GCC (e.g.,
>>      gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
>>      libraries that they use.
>>
>> Notes:
>> ======
>>
>>      None of the programs in GCC need elevated privileges to operate and
>>      it is recommended that users do not use them from accounts where such
>>      privileges are automatically available.
> 
> I'll note that we could ourselves mitigate some of that by handling privileged
> invocation of the driver specially, dropping privs on exec of the sibling tools
> and possibly using temporary files or pipes to do the parts of the I/O that
> need to be privileged.

It's not a bad idea, but it ends up legitimizing running the 
compiler as root, pushing the responsibility of privilege management 
onto the driver.  How about rejecting invocation as root altogether by 
default, bypassed with a --run-as-root flag instead?

I've also been thinking about a --sandbox flag that isolates the build 
process (for gcc as well as binutils) into a separate namespace so that 
it's usable in a restricted mode on untrusted sources without exposing 
the rest of the system to it.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-08 12:33   ` Siddhesh Poyarekar
@ 2023-08-08 12:52     ` Richard Biener
  2023-08-08 13:01       ` Jakub Jelinek
  2024-02-09 15:38       ` Martin Jambor
  0 siblings, 2 replies; 72+ messages in thread
From: Richard Biener @ 2023-08-08 12:52 UTC (permalink / raw)
  To: Siddhesh Poyarekar; +Cc: David Edelsohn, GCC Patches, Carlos O'Donell

On Tue, Aug 8, 2023 at 2:33 PM Siddhesh Poyarekar <siddhesh@gotplt.org> wrote:
>
> On 2023-08-08 04:16, Richard Biener wrote:
> > On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
> > <gcc-patches@gcc.gnu.org> wrote:
> >>
> >> FOSS Best Practices recommends that projects have an official Security
> >> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
> >> repository.  GLIBC and Binutils have added such documents.
> >>
> >> Appended is a prototype for a Security policy file for GCC based on the
> >> Binutils document because GCC seems to have more affinity with Binutils as
> >> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
> >> require additional security policies?
> >>
> >> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
> >> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
> >> Security policy?
> >>
> >> [ ] Does GCC, or some components of GCC, require additional care because of
> >> runtime libraries like libgcc and libstdc++, and because of gcov and
> >> profile-directed feedback?
> >
> > I do think that the runtime libraries should at least be explicitly mentioned
> > because they fall into the "generated output" category and bugs in the
> > runtime are usually more severe as affecting a wider class of inputs.
>
> Ack, I'd expect libstdc++ and libgcc to be aligned with glibc's
> policies.  libiberty and others on the other hand, would probably be
> more suitably aligned with binutils libbfd, where we assume trusted input.
>
> >> Thoughts?
> >>
> >> Thanks, David
> >>
> >> GCC Security Process
> >> ====================
> >>
> >> What is a GCC security bug?
> >> ===========================
> >>
> >>      A security bug is one that threatens the security of a system or
> >>      network, or might compromise the security of data stored on it.
> >>      In the context of GCC there are two ways in which such
> >>      bugs might occur.  In the first, the programs themselves might be
> >>      tricked into a direct compromise of security.  In the second, the
> >>      tools might introduce a vulnerability in the generated output that
> >>      was not already present in the files used as input.
> >>
> >>      Other than that, all other bugs will be treated as non-security
> >>      issues.  This does not mean that they will be ignored, just that
> >>      they will not be given the priority that is given to security bugs.
> >>
> >>      This stance applies to the creation tools in the GCC (e.g.,
> >>      gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
> >>      libraries that they use.
> >>
> >> Notes:
> >> ======
> >>
> >>      None of the programs in GCC need elevated privileges to operate and
> >>      it is recommended that users do not use them from accounts where such
> >>      privileges are automatically available.
> >
> > I'll note that we could ourselves mitigate some of that by handling privileged
> > invocation of the driver specially, dropping privs on exec of the sibling tools
> > and possibly using temporary files or pipes to do the parts of the I/O that
> > need to be privileged.
>
> It's not a bad idea, but it ends up giving legitimizing running the
> compiler as root, pushing the responsibility of privilege management to
> the driver.  How about rejecting invocation as root altogether by
> default, bypassed with a --run-as-root flag instead?
>
> I've also been thinking about a --sandbox flag that isolates the build
> process (for gcc as well as binutils) into a separate namespace so that
> it's usable in a restricted mode on untrusted sources without exposing
> the rest of the system to it.

There are probably external tools to do this, not sure if we should
replicate things in the driver for this.

But sure, I think the driver is the proper point to address any such
issues - iff we want to address them at all.  Maybe a nice little
Google Summer of Code project ;)

Richard.

>
> Thanks,
> Sid


* Re: [RFC] GCC Security policy
  2023-08-08 12:52     ` Richard Biener
@ 2023-08-08 13:01       ` Jakub Jelinek
  2023-08-08 13:21         ` Richard Biener
                           ` (3 more replies)
  2024-02-09 15:38       ` Martin Jambor
  1 sibling, 4 replies; 72+ messages in thread
From: Jakub Jelinek @ 2023-08-08 13:01 UTC (permalink / raw)
  To: Richard Biener
  Cc: Siddhesh Poyarekar, David Edelsohn, GCC Patches, Carlos O'Donell

On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
> There's probably external tools to do this, not sure if we should replicate
> things in the driver for this.
> 
> But sure, I think the driver is the proper point to address any of such
> issues - iff we want to address them at all.  Maybe a nice little
> google summer-of-code project ;)

What I'd really like to avoid is having all compiler bugs (primarily ICEs)
considered to be security bugs (e.g. the DoS category); it would be terrible
to release a new compiler every week because of "security" issues.
Running the compiler on untrusted sources can trigger ICEs (which we want to
fix, but there will always be some), run into compile-time and/or memory
issues (we have various quadratic-or-worse spots), or hit compiler stack
limits (deeply nested constructs, e.g. during parsing, but in other areas as
well).  So, people running fuzzers and reporting issues is great, but if
they'd get a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
compile-time-hog and memory-hog, that wouldn't be useful.
Runtime libraries, or security issues in the code we generate for valid
sources, are of course a different thing.

	Jakub



* Re: [RFC] GCC Security policy
  2023-08-08 13:01       ` Jakub Jelinek
@ 2023-08-08 13:21         ` Richard Biener
  2023-08-08 13:24         ` Michael Matz
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 72+ messages in thread
From: Richard Biener @ 2023-08-08 13:21 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Siddhesh Poyarekar, David Edelsohn, GCC Patches, Carlos O'Donell

On Tue, Aug 8, 2023 at 3:01 PM Jakub Jelinek <jakub@redhat.com> wrote:
>
> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
> > There's probably external tools to do this, not sure if we should replicate
> > things in the driver for this.
> >
> > But sure, I think the driver is the proper point to address any of such
> > issues - iff we want to address them at all.  Maybe a nice little
> > google summer-of-code project ;)
>
> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.
> Running compiler on untrusted sources can trigger ICEs (which we want to fix
> but there will always be some), or run into some compile time and/or compile
> memory issue (we have various quadratic or worse spots), compiler stack
> limits (deeply nested stuff e.g. during parsing but other areas as well).
> So, people running fuzzers and reporting issues is great, but if they'd get
> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> each compile-time-hog and each memory-hog, that wouldn't be useful.
> Runtime libraries or security issues in the code we generate for valid
> sources are of course a different thing.

We can only hope they get "confused" by our nice reporting of segfaults ...

Richard.

>         Jakub
>


* Re: [RFC] GCC Security policy
  2023-08-08 13:01       ` Jakub Jelinek
  2023-08-08 13:21         ` Richard Biener
@ 2023-08-08 13:24         ` Michael Matz
  2023-08-08 13:33         ` Paul Koning
  2023-08-08 13:34         ` Ian Lance Taylor
  3 siblings, 0 replies; 72+ messages in thread
From: Michael Matz @ 2023-08-08 13:24 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Biener, Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell

Hello,

On Tue, 8 Aug 2023, Jakub Jelinek via Gcc-patches wrote:

> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.
> Running compiler on untrusted sources can trigger ICEs (which we want to fix
> but there will always be some), or run into some compile time and/or compile
> memory issue (we have various quadratic or worse spots), compiler stack
> limits (deeply nested stuff e.g. during parsing but other areas as well).
> So, people running fuzzers and reporting issues is great, but if they'd get
> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> each compile-time-hog and each memory-hog, that wouldn't be useful.

This!  Double-this!

FWIW, the binutils security policy, and by extension the proposed GCC 
policy David posted, handles this.  (To me this is the most important 
aspect of such policy, having been on the receiving end of such nonsense 
on the binutils side).

> Runtime libraries or security issues in the code we generate for valid
> sources are of course a different thing.

Generate or otherwise provide for consumption.  E.g. a bug with security 
consequences in the runtime libs (either in source form (templates) or as 
executable code, but with the problem being in e.g. libgcc sources 
(unwinder!)) needs proper handling, similar to how glibc is handled.


Ciao,
Michael.


* Re: [RFC] GCC Security policy
  2023-08-08 13:01       ` Jakub Jelinek
  2023-08-08 13:21         ` Richard Biener
  2023-08-08 13:24         ` Michael Matz
@ 2023-08-08 13:33         ` Paul Koning
  2023-08-08 15:48           ` David Malcolm
  2023-08-08 13:34         ` Ian Lance Taylor
  3 siblings, 1 reply; 72+ messages in thread
From: Paul Koning @ 2023-08-08 13:33 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Biener, Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell



> On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
> 
> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
>> There's probably external tools to do this, not sure if we should replicate
>> things in the driver for this.
>> 
>> But sure, I think the driver is the proper point to address any of such
>> issues - iff we want to address them at all.  Maybe a nice little
>> google summer-of-code project ;)
> 
> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.

Indeed.  But my answer would be that such things are not DoS issues.  DoS means that an external input, over which you have little control, is impairing service.  In the case of a compiler, if feeding it bad source code X.c causes it to crash, the answer is "well, then don't do that".

	paul




* Re: [RFC] GCC Security policy
  2023-08-08 13:01       ` Jakub Jelinek
                           ` (2 preceding siblings ...)
  2023-08-08 13:33         ` Paul Koning
@ 2023-08-08 13:34         ` Ian Lance Taylor
  2023-08-08 14:04           ` Richard Biener
  3 siblings, 1 reply; 72+ messages in thread
From: Ian Lance Taylor @ 2023-08-08 13:34 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Richard Biener, Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell

On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
> > There's probably external tools to do this, not sure if we should replicate
> > things in the driver for this.
> >
> > But sure, I think the driver is the proper point to address any of such
> > issues - iff we want to address them at all.  Maybe a nice little
> > google summer-of-code project ;)
>
> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.
> Running compiler on untrusted sources can trigger ICEs (which we want to fix
> but there will always be some), or run into some compile time and/or compile
> memory issue (we have various quadratic or worse spots), compiler stack
> limits (deeply nested stuff e.g. during parsing but other areas as well).
> So, people running fuzzers and reporting issues is great, but if they'd get
> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> each compile-time-hog and each memory-hog, that wouldn't be useful.
> Runtime libraries or security issues in the code we generate for valid
> sources are of course a different thing.


I wonder if a security policy should say something about the -fplugin
option.  I agree that an ICE is not a security issue, but I wonder how
many people are aware that a poorly chosen command line option can
direct the compiler to run arbitrary code.  For that matter the same
is true of setting the GCC_EXEC_PREFIX environment variable, and no
doubt several other environment variables.  My point is not that we
should change these, but that a security policy should draw attention
to the fact that there are cases in which the compiler will
unexpectedly run other programs.

Ian


* Re: [RFC] GCC Security policy
  2023-08-08 13:34         ` Ian Lance Taylor
@ 2023-08-08 14:04           ` Richard Biener
  2023-08-08 14:06             ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Richard Biener @ 2023-08-08 14:04 UTC (permalink / raw)
  To: Ian Lance Taylor
  Cc: Jakub Jelinek, Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell

On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor <iant@google.com> wrote:
>
> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
> <gcc-patches@gcc.gnu.org> wrote:
> >
> > On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
> > > There's probably external tools to do this, not sure if we should replicate
> > > things in the driver for this.
> > >
> > > But sure, I think the driver is the proper point to address any of such
> > > issues - iff we want to address them at all.  Maybe a nice little
> > > google summer-of-code project ;)
> >
> > What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> > considered to be security bugs (e.g. DoS category), it would be terrible to
> > release every week a new compiler because of the "security" issues.
> > Running compiler on untrusted sources can trigger ICEs (which we want to fix
> > but there will always be some), or run into some compile time and/or compile
> > memory issue (we have various quadratic or worse spots), compiler stack
> > limits (deeply nested stuff e.g. during parsing but other areas as well).
> > So, people running fuzzers and reporting issues is great, but if they'd get
> > a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> > each compile-time-hog and each memory-hog, that wouldn't be useful.
> > Runtime libraries or security issues in the code we generate for valid
> > sources are of course a different thing.
>
>
> I wonder if a security policy should say something about the -fplugin
> option.  I agree that an ICE is not a security issue, but I wonder how
> many people are aware that a poorly chosen command line option can
> direct the compiler to run arbitrary code.  For that matter the same
> is true of setting the GCC_EXEC_PREFIX environment variable, and no
> doubt several other environment variables.  My point is not that we
> should change these, but that a security policy should draw attention
> to the fact that there are cases in which the compiler will
> unexpectedly run other programs.

Well, if you run an arbitrary commandline from the internet you get
what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
as root doesn't need plugins to shoot yourself in the foot.  You need to
know what you're doing, otherwise you are basically executing an
arbitrary shell script with whatever privileges you have.

Richard.

>
> Ian


* Re: [RFC] GCC Security policy
  2023-08-08 14:04           ` Richard Biener
@ 2023-08-08 14:06             ` Siddhesh Poyarekar
  2023-08-08 14:14               ` David Edelsohn
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-08 14:06 UTC (permalink / raw)
  To: Richard Biener, Ian Lance Taylor
  Cc: Jakub Jelinek, David Edelsohn, GCC Patches, Carlos O'Donell

On 2023-08-08 10:04, Richard Biener wrote:
> On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor <iant@google.com> wrote:
>>
>> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
>> <gcc-patches@gcc.gnu.org> wrote:
>>>
>>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
>>>> There's probably external tools to do this, not sure if we should replicate
>>>> things in the driver for this.
>>>>
>>>> But sure, I think the driver is the proper point to address any of such
>>>> issues - iff we want to address them at all.  Maybe a nice little
>>>> google summer-of-code project ;)
>>>
>>> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
>>> considered to be security bugs (e.g. DoS category), it would be terrible to
>>> release every week a new compiler because of the "security" issues.
>>> Running compiler on untrusted sources can trigger ICEs (which we want to fix
>>> but there will always be some), or run into some compile time and/or compile
>>> memory issue (we have various quadratic or worse spots), compiler stack
>>> limits (deeply nested stuff e.g. during parsing but other areas as well).
>>> So, people running fuzzers and reporting issues is great, but if they'd get
>>> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
>>> each compile-time-hog and each memory-hog, that wouldn't be useful.
>>> Runtime libraries or security issues in the code we generate for valid
>>> sources are of course a different thing.
>>
>>
>> I wonder if a security policy should say something about the -fplugin
>> option.  I agree that an ICE is not a security issue, but I wonder how
>> many people are aware that a poorly chosen command line option can
>> direct the compiler to run arbitrary code.  For that matter the same
>> is true of setting the GCC_EXEC_PREFIX environment variable, and no
>> doubt several other environment variables.  My point is not that we
>> should change these, but that a security policy should draw attention
>> to the fact that there are cases in which the compiler will
>> unexpectedly run other programs.
> 
> Well, if you run an arbitrary commandline from the internet you get
> what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
> as root doesn't need plugins to shoot yourself in the foot.  You need to
> know what you're doing, otherwise you are basically executing an
> arbitrary shell script with whatever privileges you have.

I think it would be useful to mention caveats with plugins though, just 
like it would be useful to mention exceptions for libiberty and similar 
libraries that gcc builds.  It only helps make things clearer in terms 
of what security coverage the project provides.

Thanks,
Sid

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-08 14:06             ` Siddhesh Poyarekar
@ 2023-08-08 14:14               ` David Edelsohn
  2023-08-08 14:30                 ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: David Edelsohn @ 2023-08-08 14:14 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell


On Tue, Aug 8, 2023 at 10:07 AM Siddhesh Poyarekar <siddhesh@gotplt.org>
wrote:

> On 2023-08-08 10:04, Richard Biener wrote:
> > On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor <iant@google.com> wrote:
> >>
> >> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
> >> <gcc-patches@gcc.gnu.org> wrote:
> >>>
> >>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via
> Gcc-patches wrote:
> >>>> There's probably external tools to do this, not sure if we should
> replicate
> >>>> things in the driver for this.
> >>>>
> >>>> But sure, I think the driver is the proper point to address any of
> such
> >>>> issues - iff we want to address them at all.  Maybe a nice little
> >>>> google summer-of-code project ;)
> >>>
> >>> What I'd really like to avoid is having all compiler bugs (primarily
> ICEs)
> >>> considered to be security bugs (e.g. DoS category), it would be
> terrible to
> >>> release every week a new compiler because of the "security" issues.
> >>> Running compiler on untrusted sources can trigger ICEs (which we want
> to fix
> >>> but there will always be some), or run into some compile time and/or
> compile
> >>> memory issue (we have various quadratic or worse spots), compiler stack
> >>> limits (deeply nested stuff e.g. during parsing but other areas as
> well).
> >>> So, people running fuzzers and reporting issues is great, but if
> they'd get
> >>> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> >>> each compile-time-hog and each memory-hog, that wouldn't be useful.
> >>> Runtime libraries or security issues in the code we generate for valid
> >>> sources are of course a different thing.
> >>
> >>
> >> I wonder if a security policy should say something about the -fplugin
> >> option.  I agree that an ICE is not a security issue, but I wonder how
> >> many people are aware that a poorly chosen command line option can
> >> direct the compiler to run arbitrary code.  For that matter the same
> >> is true of setting the GCC_EXEC_PREFIX environment variable, and no
> >> doubt several other environment variables.  My point is not that we
> >> should change these, but that a security policy should draw attention
> >> to the fact that there are cases in which the compiler will
> >> unexpectedly run other programs.
> >
> > Well, if you run an arbitrary commandline from the internet you get
> > what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
> > as root doesn't need plugins to shoot yourself in the foot.  You need to
> > know what you're doing, otherwise you are basically executing an
> > arbitrary shell script with whatever privileges you have.
>
> I think it would be useful to mention caveats with plugins though, just
> like it would be useful to mention exceptions for libiberty and similar
> libraries that gcc builds.  It only helps make things clearer in terms
> of what security coverage the project provides.
>

I have added a line to the Note section in the proposed text:

    GCC and its tools provide features and options that can run arbitrary
user code (e.g., -fplugin).

I believe that the security implication already is addressed because the
program is not tricked into a direct compromise of security.

Do you have a suggestion for the language to address libgcc, libstdc++,
etc. and libiberty, libbacktrace, etc.?

Thanks, David


* Re: [RFC] GCC Security policy
  2023-08-08 14:14               ` David Edelsohn
@ 2023-08-08 14:30                 ` Siddhesh Poyarekar
  2023-08-08 14:37                   ` Jakub Jelinek
  2023-08-09 17:32                   ` Siddhesh Poyarekar
  0 siblings, 2 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-08 14:30 UTC (permalink / raw)
  To: David Edelsohn
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell

On 2023-08-08 10:14, David Edelsohn wrote:
> On Tue, Aug 8, 2023 at 10:07 AM Siddhesh Poyarekar <siddhesh@gotplt.org 
> <mailto:siddhesh@gotplt.org>> wrote:
> 
>     On 2023-08-08 10:04, Richard Biener wrote:
>      > On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor <iant@google.com
>     <mailto:iant@google.com>> wrote:
>      >>
>      >> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
>      >> <gcc-patches@gcc.gnu.org <mailto:gcc-patches@gcc.gnu.org>> wrote:
>      >>>
>      >>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via
>     Gcc-patches wrote:
>      >>>> There's probably external tools to do this, not sure if we
>     should replicate
>      >>>> things in the driver for this.
>      >>>>
>      >>>> But sure, I think the driver is the proper point to address
>     any of such
>      >>>> issues - iff we want to address them at all.  Maybe a nice little
>      >>>> google summer-of-code project ;)
>      >>>
>      >>> What I'd really like to avoid is having all compiler bugs
>     (primarily ICEs)
>      >>> considered to be security bugs (e.g. DoS category), it would be
>     terrible to
>      >>> release every week a new compiler because of the "security" issues.
>      >>> Running compiler on untrusted sources can trigger ICEs (which
>     we want to fix
>      >>> but there will always be some), or run into some compile time
>     and/or compile
>      >>> memory issue (we have various quadratic or worse spots),
>     compiler stack
>      >>> limits (deeply nested stuff e.g. during parsing but other areas
>     as well).
>      >>> So, people running fuzzers and reporting issues is great, but
>     if they'd get
>      >>> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
>      >>> each compile-time-hog and each memory-hog, that wouldn't be useful.
>      >>> Runtime libraries or security issues in the code we generate
>     for valid
>      >>> sources are of course a different thing.
>      >>
>      >>
>      >> I wonder if a security policy should say something about the
>     -fplugin
>      >> option.  I agree that an ICE is not a security issue, but I
>     wonder how
>      >> many people are aware that a poorly chosen command line option can
>      >> direct the compiler to run arbitrary code.  For that matter the same
>      >> is true of setting the GCC_EXEC_PREFIX environment variable, and no
>      >> doubt several other environment variables.  My point is not that we
>      >> should change these, but that a security policy should draw
>     attention
>      >> to the fact that there are cases in which the compiler will
>      >> unexpectedly run other programs.
>      >
>      > Well, if you run an arbitrary commandline from the internet you get
>      > what you deserve, running "echo "Hello World" | gcc -xc - -o
>     /dev/sda"
>      > as root doesn't need plugins to shoot yourself in the foot.  You
>     need to
>      > know what you're doing, otherwise you are basically executing an
>      > arbitrary shell script with whatever privileges you have.
> 
>     I think it would be useful to mention caveats with plugins though, just
>     like it would be useful to mention exceptions for libiberty and similar
>     libraries that gcc builds.  It only helps make things clearer in terms
>     of what security coverage the project provides.
> 
> 
> I have added a line to the Note section in the proposed text:
> 
>      GCC and its tools provide features and options that can run 
> arbitrary user code (e.g., -fplugin).

How about the following to make it clearer that arbitrary code in 
plugins is not considered secure:

GCC and its tools provide features and options that can run arbitrary 
user code, e.g. using the -fplugin option.  Such custom code should be 
vetted by the user for safety, as bugs exposed through such code will not 
be considered security issues.

> I believe that the security implication already is addressed because the 
> program is not tricked into a direct compromise of security.
> 
> Do you have a suggestion for the language to address libgcc, libstdc++, 
> etc. and libiberty, libbacktrace, etc.?

I'll work on this a bit and share a draft.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-08 14:30                 ` Siddhesh Poyarekar
@ 2023-08-08 14:37                   ` Jakub Jelinek
  2023-08-08 14:40                     ` Siddhesh Poyarekar
  2023-08-08 17:35                     ` Ian Lance Taylor
  2023-08-09 17:32                   ` Siddhesh Poyarekar
  1 sibling, 2 replies; 72+ messages in thread
From: Jakub Jelinek @ 2023-08-08 14:37 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, Richard Biener, Ian Lance Taylor, GCC Patches,
	Carlos O'Donell

On Tue, Aug 08, 2023 at 10:30:10AM -0400, Siddhesh Poyarekar wrote:
> > Do you have a suggestion for the language to address libgcc, libstdc++,
> > etc. and libiberty, libbacktrace, etc.?
> 
> I'll work on this a bit and share a draft.

BTW, I think we should perhaps differentiate between production ready
libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, libquadmath,
libssp) vs. e.g. the sanitizer libraries which are meant for debugging and
I believe it is highly risky to run them in programs with extra privileges
- e.g. I think they use getenv rather than *secure_getenv to get at various
tweaks for their behavior including where logging will happen and upstream
doesn't really care.
And not really sure what to say about lesser used language support
libraries, libada, libphobos, libgo, libgm2, ... nor what to say about
libvtv etc.

	Jakub


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-08 14:37                   ` Jakub Jelinek
@ 2023-08-08 14:40                     ` Siddhesh Poyarekar
  2023-08-08 16:22                       ` Richard Earnshaw (lists)
  2023-08-08 17:35                     ` Ian Lance Taylor
  1 sibling, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-08 14:40 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: David Edelsohn, Richard Biener, Ian Lance Taylor, GCC Patches,
	Carlos O'Donell

On 2023-08-08 10:37, Jakub Jelinek wrote:
> On Tue, Aug 08, 2023 at 10:30:10AM -0400, Siddhesh Poyarekar wrote:
>>> Do you have a suggestion for the language to address libgcc, libstdc++,
>>> etc. and libiberty, libbacktrace, etc.?
>>
>> I'll work on this a bit and share a draft.
> 
> BTW, I think we should perhaps differentiate between production ready
> libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, libquadmath,
> libssp) vs. e.g. the sanitizer libraries which are meant for debugging and

Agreed, that's why I need some time to sort all of the libraries gcc 
builds to categorize them into various levels of support in terms of 
safety re. untrusted input.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-08 13:33         ` Paul Koning
@ 2023-08-08 15:48           ` David Malcolm
  2023-08-08 15:55             ` Siddhesh Poyarekar
  2023-08-08 20:02             ` Joseph Myers
  0 siblings, 2 replies; 72+ messages in thread
From: David Malcolm @ 2023-08-08 15:48 UTC (permalink / raw)
  To: Paul Koning, Jakub Jelinek, Andrea Corallo
  Cc: Richard Biener, Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell

On Tue, 2023-08-08 at 09:33 -0400, Paul Koning via Gcc-patches wrote:
> 
> 
> > On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches
> > <gcc-patches@gcc.gnu.org> wrote:
> > 
> > On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-
> > patches wrote:
> > > There's probably external tools to do this, not sure if we should
> > > replicate
> > > things in the driver for this.
> > > 
> > > But sure, I think the driver is the proper point to address any
> > > of such
> > > issues - iff we want to address them at all.  Maybe a nice little
> > > google summer-of-code project ;)
> > 
> > What I'd really like to avoid is having all compiler bugs
> > (primarily ICEs)
> > considered to be security bugs (e.g. DoS category), it would be
> > terrible to
> > release every week a new compiler because of the "security" issues.
> 
> Indeed.  But my answer would be that such things are not DoS issues. 
> DoS means that an external input, over which you have little control,
> is impairing service.  In the case of a compiler, if feeding it bad
> source code X.c causes it to crash, the answer is "well, then don't
> do that".

Agreed.

I'm not sure how to "wordsmith" this, but it seems like the sources and
options on the *host* are assumed to be trusted, and that the act of
*compiling* source on the host requires trusting them, just like the
act of executing the compiled code on the target does.  Though users
may be more familiar with sandboxing the target than the host.

We should spell this out further for libgccjit: libgccjit allows for
ahead-of-time and JIT compilation of sources - but it assumes that
those sources (and the compilation options) are trusted.

[Adding Andrea Corallo to the addressees]

For example, Emacs is using libgccjit to do ahead-of-time compilation
of Emacs bytecode.  I'm assuming that Emacs is assuming that its
bytecode is trusted, and that there isn't any attempt by Emacs to
sandbox the Emacs Lisp being processed.

However, consider a situation in which someone attempted to, say, embed
libgccjit inside a web browser to generate machine code from
JavaScript, where the JavaScript is potentially controlled by an
attacker.  I think we want to explicitly say that if you're going
to do that, you need to put some other layer of defense in, so that
you're not blithely accepting the inputs to the compilation (sources
and options) from a potentially hostile source, where a crafted input
sources could potentially hit an ICE in the compiler and thus crash the
web browser.

Dave



* Re: [RFC] GCC Security policy
  2023-08-08 15:48           ` David Malcolm
@ 2023-08-08 15:55             ` Siddhesh Poyarekar
  2023-08-08 16:35               ` Paul Koning
  2023-08-08 20:02             ` Joseph Myers
  1 sibling, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-08 15:55 UTC (permalink / raw)
  To: David Malcolm, Paul Koning, Jakub Jelinek, Andrea Corallo
  Cc: Richard Biener, David Edelsohn, GCC Patches, Carlos O'Donell

On 2023-08-08 11:48, David Malcolm wrote:
> On Tue, 2023-08-08 at 09:33 -0400, Paul Koning via Gcc-patches wrote:
>>
>>
>>> On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches
>>> <gcc-patches@gcc.gnu.org> wrote:
>>>
>>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-
>>> patches wrote:
>>>> There's probably external tools to do this, not sure if we should
>>>> replicate
>>>> things in the driver for this.
>>>>
>>>> But sure, I think the driver is the proper point to address any
>>>> of such
>>>> issues - iff we want to address them at all.  Maybe a nice little
>>>> google summer-of-code project ;)
>>>
>>> What I'd really like to avoid is having all compiler bugs
>>> (primarily ICEs)
>>> considered to be security bugs (e.g. DoS category), it would be
>>> terrible to
>>> release every week a new compiler because of the "security" issues.
>>
>> Indeed.  But my answer would be that such things are not DoS issues.
>> DoS means that an external input, over which you have little control,
>> is impairing service.  In the case of a compiler, if feeding it bad
>> source code X.c causes it to crash, the answer is "well, then don't
>> do that".
> 
> Agreed.
> 
> I'm not sure how to "wordsmith" this, but it seems like the sources and
> options on the *host* are assumed to be trusted, and that the act of
> *compiling* source on the host requires trusting them, just like the
> act of executing the compiled code on the target does.  Though users
> may be more familiar with sandboxing the target than the host.
> 
> We should spell this out further for libgccjit: libgccjit allows for
> ahead-of-time and JIT compilation of sources - but it assumes that
> those sources (and the compilation options) are trusted.
> 
> [Adding Andrea Corallo to the addressees]
> 
> For example, Emacs is using libgccjit to do ahead-of-time compilation
> of Emacs bytecode.  I'm assuming that Emacs is assuming that its
> bytecode is trusted, and that there isn't any attempt by Emacs to
> sandbox the Emacs Lisp being processed.
> 
> However, consider a situation in which someone attempted to, say, embed
> libgccjit inside a web browser to generate machine code from
> JavaScript, where the JavaScript is potentially controlled by an
> attacker.  I think we want to explicitly say that if you're going
> to do that, you need to put some other layer of defense in, so that
> you're not blithely accepting the inputs to the compilation (sources
> and options) from a potentially hostile source, where a crafted input
> sources could potentially hit an ICE in the compiler and thus crash the
> web browser.

+1, this is precisely the kind of thing the security policy should warn 
against and suggest using sandboxing for.  The compiler (or libgccjit) 
isn't really in a position to defend against such uses, ICE or otherwise.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-08 14:40                     ` Siddhesh Poyarekar
@ 2023-08-08 16:22                       ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 72+ messages in thread
From: Richard Earnshaw (lists) @ 2023-08-08 16:22 UTC (permalink / raw)
  To: Siddhesh Poyarekar, Jakub Jelinek
  Cc: David Edelsohn, Richard Biener, Ian Lance Taylor, GCC Patches,
	Carlos O'Donell

On 08/08/2023 15:40, Siddhesh Poyarekar wrote:
> On 2023-08-08 10:37, Jakub Jelinek wrote:
>> On Tue, Aug 08, 2023 at 10:30:10AM -0400, Siddhesh Poyarekar wrote:
>>>> Do you have a suggestion for the language to address libgcc, libstdc++,
>>>> etc. and libiberty, libbacktrace, etc.?
>>>
>>> I'll work on this a bit and share a draft.
>>
>> BTW, I think we should perhaps differentiate between production ready
>> libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, 
>> libquadmath,
>> libssp) vs. e.g. the sanitizer libraries which are meant for debugging 
>> and
> 
> Agreed, that's why I need some time to sort all of the libraries gcc 
> builds to categorize them into various levels of support in terms of 
> safety re. untrusted input.
> 
> Thanks,
> Sid

Related to this, our coding standards should really reflect what we 
consider good practice these days.  E.g. there are many library APIs 
around that were once considered acceptable but that, frankly, we would 
be better off uninventing.

R.


* Re: [RFC] GCC Security policy
  2023-08-08 15:55             ` Siddhesh Poyarekar
@ 2023-08-08 16:35               ` Paul Koning
  0 siblings, 0 replies; 72+ messages in thread
From: Paul Koning @ 2023-08-08 16:35 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Malcolm, Jakub Jelinek, Andrea Corallo, Richard Biener,
	David Edelsohn, GCC Patches, Carlos O'Donell



> On Aug 8, 2023, at 11:55 AM, Siddhesh Poyarekar <siddhesh@gotplt.org> wrote:
> 
> On 2023-08-08 11:48, David Malcolm wrote:
>> On Tue, 2023-08-08 at 09:33 -0400, Paul Koning via Gcc-patches wrote:
>>> 
>>> 
>>>> On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches
>>>> <gcc-patches@gcc.gnu.org> wrote:
>>>> 
>>>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-
>>>> patches wrote:
>>>>> There's probably external tools to do this, not sure if we should
>>>>> replicate
>>>>> things in the driver for this.
>>>>> 
>>>>> But sure, I think the driver is the proper point to address any
>>>>> of such
>>>>> issues - iff we want to address them at all.  Maybe a nice little
>>>>> google summer-of-code project ;)
>>>> 
>>>> What I'd really like to avoid is having all compiler bugs
>>>> (primarily ICEs)
>>>> considered to be security bugs (e.g. DoS category), it would be
>>>> terrible to
>>>> release every week a new compiler because of the "security" issues.
>>> 
>>> Indeed.  But my answer would be that such things are not DoS issues.
>>> DoS means that an external input, over which you have little control,
>>> is impairing service.  In the case of a compiler, if feeding it bad
>>> source code X.c causes it to crash, the answer is "well, then don't
>>> do that".
>> Agreed.
>> I'm not sure how to "wordsmith" this, but it seems like the sources and
>> options on the *host* are assumed to be trusted, and that the act of
>> *compiling* source on the host requires trusting them, just like the
>> act of executing the compiled code on the target does.  Though users
>> may be more familiar with sandboxing the target than the host.
>> We should spell this out further for libgccjit: libgccjit allows for
>> ahead-of-time and JIT compilation of sources - but it assumes that
>> those sources (and the compilation options) are trusted.
>> [Adding Andrea Corallo to the addressees]
>> For example, Emacs is using libgccjit to do ahead-of-time compilation
>> of Emacs bytecode.  I'm assuming that Emacs is assuming that its
>> bytecode is trusted, and that there isn't any attempt by Emacs to
>> sandbox the Emacs Lisp being processed.
>> However, consider a situation in which someone attempted to, say, embed
>> libgccjit inside a web browser to generate machine code from
>> JavaScript, where the JavaScript is potentially controlled by an
>> attacker.  I think we want to explicitly say that if you're going
>> to do that, you need to put some other layer of defense in, so that
>> you're not blithely accepting the inputs to the compilation (sources
>> and options) from a potentially hostile source, where a crafted input
>> sources could potentially hit an ICE in the compiler and thus crash the
>> web browser.
> 
> +1, this is precisely the kind of thing the security policy should warn against and suggest using sandboxing for.  The compiler (or libgccjit) isn't really in a position to defend against such uses, ICE or otherwise.

I agree somewhat.  But only somewhat, because the compiler's job is not to crash even if presented with bad inputs.  An ICE is a bug, which of course we've always accepted.  But as several have agreed, it's not a DoS bug, therefore not a security bug.

The scenario of the web browser is a valid one, and I would use it to illustrate a general point, which is redundancy in safety measures. If inputs come from possibly hostile sources, it's sound practice to have multiple layers of protection.  The consuming software should be robust so it doesn't fail when subjected to bad inputs.  But additional layers of protection in case there is a defect in the first layer are valuable, and sandboxing or the like (chroot, for example) can provide that additional defense.  This isn't really a GCC issue but rather a general principle of prudence.

	paul



* Re: [RFC] GCC Security policy
  2023-08-08 14:37                   ` Jakub Jelinek
  2023-08-08 14:40                     ` Siddhesh Poyarekar
@ 2023-08-08 17:35                     ` Ian Lance Taylor
  2023-08-08 17:46                       ` David Edelsohn
  1 sibling, 1 reply; 72+ messages in thread
From: Ian Lance Taylor @ 2023-08-08 17:35 UTC (permalink / raw)
  To: Jakub Jelinek
  Cc: Siddhesh Poyarekar, David Edelsohn, Richard Biener, GCC Patches,
	Carlos O'Donell

On Tue, Aug 8, 2023 at 7:37 AM Jakub Jelinek <jakub@redhat.com> wrote:
>
> BTW, I think we should perhaps differentiate between production ready
> libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, libquadmath,
> libssp) vs. e.g. the sanitizer libraries which are meant for debugging and
> I believe it is highly risky to run them in programs with extra privileges
> - e.g. I think they use getenv rather than *secure_getenv to get at various
> tweaks for their behavior including where logging will happen and upstream
> doesn't really care.
> And not really sure what to say about lesser used language support
> libraries, libada, libphobos, libgo, libgm2, ... nor what to say about
> libvtv etc.

libgo is a complicated case because it has a lot of components
including a web server with TLS support, so there are a lot of
potential security issues for programs that use libgo.  The upstream
security policy is https://go.dev/security/policy.  I'm not sure what
to say about libgo in GCC, since realistically the support for
security problems is best-effort.  I guess we should at least accept
security reports, even if we can't promise to fix them quickly.

Ian


* Re: [RFC] GCC Security policy
  2023-08-08 17:35                     ` Ian Lance Taylor
@ 2023-08-08 17:46                       ` David Edelsohn
  2023-08-08 19:39                         ` Carlos O'Donell
  0 siblings, 1 reply; 72+ messages in thread
From: David Edelsohn @ 2023-08-08 17:46 UTC (permalink / raw)
  To: Ian Lance Taylor
  Cc: Jakub Jelinek, Siddhesh Poyarekar, Richard Biener, GCC Patches,
	Carlos O'Donell


On Tue, Aug 8, 2023 at 1:36 PM Ian Lance Taylor <iant@google.com> wrote:

> On Tue, Aug 8, 2023 at 7:37 AM Jakub Jelinek <jakub@redhat.com> wrote:
> >
> > BTW, I think we should perhaps differentiate between production ready
> > libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran,
> libquadmath,
> > libssp) vs. e.g. the sanitizer libraries which are meant for debugging
> and
> > I believe it is highly risky to run them in programs with extra
> privileges
> > - e.g. I think they use getenv rather than *secure_getenv to get at
> various
> > tweaks for their behavior including where logging will happen and
> upstream
> > doesn't really care.
> > And not really sure what to say about lesser used language support
> > libraries, libada, libphobos, libgo, libgm2, ... nor what to say about
> > libvtv etc.
>
> libgo is a complicated case because it has a lot of components
> including a web server with TLS support, so there are a lot of
> potential security issues for programs that use libgo.  The upstream
> security policy is https://go.dev/security/policy.  I'm not sure what
> to say about libgo in GCC, since realistically the support for
> security problems is best-effort.  I guess we should at least accept
> security reports, even if we can't promise to fix them quickly.
>

 I believe that upstream projects for components that are imported into GCC
should be responsible for their security policy, including libgo,
gofrontend, libsanitizer (other than local patches), zlib, libtool,
libphobos, libcody, libffi, eventually Rust libcore, etc.

Thanks, David


* Re: [RFC] GCC Security policy
  2023-08-08 17:46                       ` David Edelsohn
@ 2023-08-08 19:39                         ` Carlos O'Donell
  2023-08-09 13:25                           ` Richard Earnshaw (lists)
  0 siblings, 1 reply; 72+ messages in thread
From: Carlos O'Donell @ 2023-08-08 19:39 UTC (permalink / raw)
  To: David Edelsohn, Ian Lance Taylor
  Cc: Jakub Jelinek, Siddhesh Poyarekar, Richard Biener, GCC Patches

On 8/8/23 13:46, David Edelsohn wrote:
> I believe that upstream projects for components that are imported
> into GCC should be responsible for their security policy, including
> libgo, gofrontend, libsanitizer (other than local patches), zlib,
> libtool, libphobos, libcody, libffi, eventually Rust libcore, etc.

I agree completely.

We can reference the upstream and direct people to follow upstream security
policy for these bundled components.

Any other policy risks having conflicting guidance between the projects,
which is not useful for security policy.

There might be exceptions to this rule, particularly when the downstream
wants to accept particular risks while upstream does not; but none of these
components are in that case IMO.

-- 
Cheers,
Carlos.



* Re: [RFC] GCC Security policy
  2023-08-08 15:48           ` David Malcolm
  2023-08-08 15:55             ` Siddhesh Poyarekar
@ 2023-08-08 20:02             ` Joseph Myers
  1 sibling, 0 replies; 72+ messages in thread
From: Joseph Myers @ 2023-08-08 20:02 UTC (permalink / raw)
  To: David Malcolm
  Cc: Paul Koning, Jakub Jelinek, Andrea Corallo, Richard Biener,
	Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell

On Tue, 8 Aug 2023, David Malcolm via Gcc-patches wrote:

> However, consider a situation in which someone attempted to, say, embed
> libgccjit inside a web browser to generate machine code from
> JavaScript, where the JavaScript is potentially controlled by an
> attacker.  I think we want to explicitly say that if you're going
> to do that, you need to put some other layer of defense in, so that
> you're not blithely accepting the inputs to the compilation (sources
> and options) from a potentially hostile source, where a crafted input
> sources could potentially hit an ICE in the compiler and thus crash the
> web browser.

A binutils analogue of sorts: you might well want to use objdump etc. on 
untrusted input, e.g. as part of analysis of a captured malware sample.  
But if you are using binutils tools in malware analysis, you really, 
really need to do so in a heavily sandboxed environment, as the malware 
could well try to exploit any system investigating it.

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: [RFC] GCC Security policy
  2023-08-08 19:39                         ` Carlos O'Donell
@ 2023-08-09 13:25                           ` Richard Earnshaw (lists)
  0 siblings, 0 replies; 72+ messages in thread
From: Richard Earnshaw (lists) @ 2023-08-09 13:25 UTC (permalink / raw)
  To: Carlos O'Donell, David Edelsohn, Ian Lance Taylor
  Cc: Jakub Jelinek, Siddhesh Poyarekar, Richard Biener, GCC Patches

On 08/08/2023 20:39, Carlos O'Donell via Gcc-patches wrote:
> On 8/8/23 13:46, David Edelsohn wrote:
>> I believe that upstream projects for components that are imported
>> into GCC should be responsible for their security policy, including
>> libgo, gofrontend, libsanitizer (other than local patches), zlib,
>> libtool, libphobos, libcody, libffi, eventually Rust libcore, etc.
> 
> I agree completely.
> 
> We can reference the upstream and direct people to follow upstream security
> policy for these bundled components.
> 
> Any other policy risks having conflicting guidance between the projects,
> which is not useful for security policy.
> 
> There might be exceptions to this rule, particularly when the downstream
> wants to accept particular risks while upstream does not; but none of these
> components are in that case IMO.
> 

I agree with that, but with one caveat.  Our policy should state what 
we do once upstream has addressed the issue.

R.


* Re: [RFC] GCC Security policy
  2023-08-08 14:30                 ` Siddhesh Poyarekar
  2023-08-08 14:37                   ` Jakub Jelinek
@ 2023-08-09 17:32                   ` Siddhesh Poyarekar
  2023-08-09 18:17                     ` David Edelsohn
                                       ` (2 more replies)
  1 sibling, 3 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-09 17:32 UTC (permalink / raw)
  To: David Edelsohn
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell

On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
>> Do you have a suggestion for the language to address libgcc, 
>> libstdc++, etc. and libiberty, libbacktrace, etc.?
> 
> I'll work on this a bit and share a draft.

Hi David,

Here's what I came up with for different parts of GCC, including the 
runtime libraries.  Over time we may find that specific parts of runtime 
libraries simply cannot be used safely in some contexts and flag that.

Sid

"""
What is a GCC security bug?
===========================

     A security bug is one that threatens the security of a system or
     network, or might compromise the security of data stored on it.
     In the context of GCC there are multiple ways in which this might
     happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
-----------------------------------------------------------

     The compiler driver processes source code, invokes other programs
     such as the assembler and linker and generates the output result,
     which may be assembly code or machine code.  It is necessary that
     all source code inputs to the compiler are trusted, since it is
     impossible for the driver to validate input source code beyond
     conformance to a programming language standard.

     The GCC JIT implementation, libgccjit, is intended to be plugged
     into applications to translate input source code in the
     application context.  Limitations that apply to the compiler
     driver apply here too in terms of sanitizing inputs, so it is
     recommended that inputs are either sanitized by an external
     program to allow only trusted, safe execution in the context of
     the application, or that the JIT execution context is
     appropriately sandboxed to contain the effects of any bugs in the
     JIT or its generated code to the sandboxed environment.

     Support libraries such as libiberty, libcc1, libvtv and libcpp
     have been developed separately to share code with other tools
     such as binutils and gdb.  These libraries face challenges
     similar to those of the compiler drivers.  While they are
     expected to be robust against arbitrary input, they should only
     be used with trusted inputs.

     Libraries such as zlib and libffi that are bundled into GCC to
     build it will be treated the same as the compiler drivers and
     programs as far as security coverage is concerned.

     As a result, the only potential security issue in all these
     cases is when the tools end up generating vulnerable output for
     valid input source code.

Language runtime libraries
--------------------------

     GCC also builds and distributes libraries that are intended to be
     used widely to implement runtime support for various programming
     languages.  These include the following:

     * libada
     * libatomic
     * libbacktrace
     * libcc1
     * libcody
     * libcpp
     * libdecnumber
     * libgcc
     * libgfortran
     * libgm2
     * libgo
     * libgomp
     * libiberty
     * libitm
     * libobjc
     * libphobos
     * libquadmath
     * libssp
     * libstdc++

     These libraries are intended to be used in arbitrary contexts
     and as a result, bugs in these libraries may be evaluated for
     security impact.  However, some of these libraries, e.g. libgo,
     libphobos, etc., are not maintained in the GCC project, so the
     GCC project may not be the correct point of contact for them.
     You are encouraged to look at README files within those library
     directories to locate the canonical security contact point for
     those projects.

Diagnostic libraries
--------------------

     The sanitizer library bundled in GCC is intended to be used in
     diagnostic cases and not intended for use in sensitive environments.
     As a result, bugs in the sanitizer will not be considered security
     sensitive.

GCC plugins
-----------

     It should be noted that GCC may execute arbitrary code loaded by a
     user through the GCC plugin mechanism or through system preloading
     mechanism.  Such custom code should be vetted by the user for safety
     as bugs exposed through such code will not be considered security
     issues.


* Re: [RFC] GCC Security policy
  2023-08-09 17:32                   ` Siddhesh Poyarekar
@ 2023-08-09 18:17                     ` David Edelsohn
  2023-08-09 20:12                       ` Siddhesh Poyarekar
  2023-08-10 18:28                     ` Richard Sandiford
  2023-08-11 15:12                     ` David Edelsohn
  2 siblings, 1 reply; 72+ messages in thread
From: David Edelsohn @ 2023-08-09 18:17 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell


On Wed, Aug 9, 2023 at 1:33 PM Siddhesh Poyarekar <siddhesh@gotplt.org>
wrote:

> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
> >> Do you have a suggestion for the language to address libgcc,
> >> libstdc++, etc. and libiberty, libbacktrace, etc.?
> >
> > I'll work on this a bit and share a draft.
>
> Hi David,
>
> Here's what I came up with for different parts of GCC, including the
> runtime libraries.  Over time we may find that specific parts of runtime
> libraries simply cannot be used safely in some contexts and flag that.
>
> Sid
>

Hi, Sid

Thanks for iterating on this.


>
> """
> What is a GCC security bug?
> ===========================
>
>      A security bug is one that threatens the security of a system or
>      network, or might compromise the security of data stored on it.
>      In the context of GCC there are multiple ways in which this might
>      happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> -----------------------------------------------------------
>
>      The compiler driver processes source code, invokes other programs
>      such as the assembler and linker and generates the output result,
>      which may be assembly code or machine code.  It is necessary that
>      all source code inputs to the compiler are trusted, since it is
>      impossible for the driver to validate input source code beyond
>      conformance to a programming language standard.
>
>      The GCC JIT implementation, libgccjit, is intended to be plugged
>      into applications to translate input source code in the application
>      context.  Limitations that apply to the compiler
>      driver, apply here too in terms of sanitizing inputs, so it is
>      recommended that inputs are either sanitized by an external program
>      to allow only trusted, safe execution in the context of the
>      application or the JIT execution context is appropriately sandboxed
>      to contain the effects of any bugs in the JIT or its generated code
>      to the sandboxed environment.
>
>      Support libraries such as libiberty, libcc1 libvtv and libcpp have
>      been developed separately to share code with other tools such as
>      binutils and gdb.  These libraries again have similar challenges to
>      compiler drivers.  While they are expected to be robust against
>      arbitrary input, they should only be used with trusted inputs.
>
>      Libraries such as zlib and libffi that bundled into GCC to build it
>      will be treated the same as the compiler drivers and programs as far
>      as security coverage is concerned.
>

Should we direct people to the upstream projects for their security
policies?


>      As a result, the only case for a potential security issue in all
>      these cases is when it ends up generating vulnerable output for
>      valid input source code.


> Language runtime libraries
> --------------------------
>
>      GCC also builds and distributes libraries that are intended to be
>      used widely to implement runtime support for various programming
>      languages.  These include the following:
>
>      * libada
>      * libatomic
>      * libbacktrace
>      * libcc1
>      * libcody
>      * libcpp
>      * libdecnumber
>      * libgcc
>      * libgfortran
>      * libgm2
>      * libgo
>      * libgomp
>      * libiberty
>      * libitm
>      * libobjc
>      * libphobos
>      * libquadmath
>      * libssp
>      * libstdc++
>
>      These libraries are intended to be used in arbitrary contexts and as
>      a result, bugs in these libraries may be evaluated for security
>      impact.  However, some of these libraries, e.g. libgo, libphobos,
>      etc.  are not maintained in the GCC project, due to which the GCC
>      project may not be the correct point of contact for them.  You are
>      encouraged to look at README files within those library directories
>      to locate the canonical security contact point for those projects.
>

As Richard mentioned, should GCC make a specific statement about the
security policy / response for issues that are discovered and fixed in the
upstream projects from which the GCC libraries are imported?


>
> Diagnostic libraries
> --------------------
>
>      The sanitizer library bundled in GCC is intended to be used in
>      diagnostic cases and not intended for use in sensitive environments.
>      As a result, bugs in the sanitizer will not be considered security
>      sensitive.
>
> GCC plugins
> -----------
>
>      It should be noted that GCC may execute arbitrary code loaded by a
>      user through the GCC plugin mechanism or through system preloading
>      mechanism.  Such custom code should be vetted by the user for safety
>      as bugs exposed through such code will not be considered security
>      issues.
>

Thanks, David


* Re: [RFC] GCC Security policy
  2023-08-09 18:17                     ` David Edelsohn
@ 2023-08-09 20:12                       ` Siddhesh Poyarekar
  0 siblings, 0 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-09 20:12 UTC (permalink / raw)
  To: David Edelsohn
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell

On 2023-08-09 14:17, David Edelsohn wrote:
> On Wed, Aug 9, 2023 at 1:33 PM Siddhesh Poyarekar <siddhesh@gotplt.org 
> <mailto:siddhesh@gotplt.org>> wrote:
> 
>     On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
>      >> Do you have a suggestion for the language to address libgcc,
>      >> libstdc++, etc. and libiberty, libbacktrace, etc.?
>      >
>      > I'll work on this a bit and share a draft.
> 
>     Hi David,
> 
>     Here's what I came up with for different parts of GCC, including the
>     runtime libraries.  Over time we may find that specific parts of
>     runtime
>     libraries simply cannot be used safely in some contexts and flag that.
> 
>     Sid
> 
> 
> Hi, Sid
> 
> Thanks for iterating on this.
> 
> 
>     """
>     What is a GCC security bug?
>     ===========================
> 
>           A security bug is one that threatens the security of a system or
>           network, or might compromise the security of data stored on it.
>           In the context of GCC there are multiple ways in which this might
>           happen and they're detailed below.
> 
>     Compiler drivers, programs, libgccjit and support libraries
>     -----------------------------------------------------------
> 
>           The compiler driver processes source code, invokes other programs
>           such as the assembler and linker and generates the output result,
>           which may be assembly code or machine code.  It is necessary that
>           all source code inputs to the compiler are trusted, since it is
>           impossible for the driver to validate input source code beyond
>           conformance to a programming language standard.
> 
>           The GCC JIT implementation, libgccjit, is intended to be plugged
>           into applications to translate input source code in the
>     application
>           context.  Limitations that apply to the compiler
>           driver, apply here too in terms of sanitizing inputs, so it is
>           recommended that inputs are either sanitized by an external
>     program
>           to allow only trusted, safe execution in the context of the
>           application or the JIT execution context is appropriately
>     sandboxed
>           to contain the effects of any bugs in the JIT or its generated
>     code
>           to the sandboxed environment.
> 
>           Support libraries such as libiberty, libcc1 libvtv and libcpp have
>           been developed separately to share code with other tools such as
>           binutils and gdb.  These libraries again have similar
>     challenges to
>           compiler drivers.  While they are expected to be robust against
>           arbitrary input, they should only be used with trusted inputs.
> 
>           Libraries such as zlib and libffi that bundled into GCC to
>     build it
>           will be treated the same as the compiler drivers and programs
>     as far
>           as security coverage is concerned.
> 
> 
> Should we direct people to the upstream projects for their security 
> policies?

We bundle zlib and libffi, so regardless of whether an issue is a
security issue in those libraries themselves (the security impact of
memory safety bugs in general use libraries is context dependent and
hence they get assigned CVEs more often than not), the context in gcc
is well defined as a local unprivileged executable and hence not
security-relevant.

That said, we could add something like:

     However, if you find an issue in these libraries independent of
     their use in GCC, you should reach out to their upstream projects
     to report it.

> 
> 
>           As a result, the only case for a potential security issue in all
>           these cases is when it ends up generating vulnerable output for
>           valid input source code.
> 
> 
>     Language runtime libraries
>     --------------------------
> 
>           GCC also builds and distributes libraries that are intended to be
>           used widely to implement runtime support for various programming
>           languages.  These include the following:
> 
>           * libada
>           * libatomic
>           * libbacktrace
>           * libcc1
>           * libcody
>           * libcpp
>           * libdecnumber
>           * libgcc
>           * libgfortran
>           * libgm2
>           * libgo
>           * libgomp
>           * libiberty
>           * libitm
>           * libobjc
>           * libphobos
>           * libquadmath
>           * libssp
>           * libstdc++
> 
>           These libraries are intended to be used in arbitrary contexts
>     and as
>           a result, bugs in these libraries may be evaluated for security
>           impact.  However, some of these libraries, e.g. libgo, libphobos,
>           etc.  are not maintained in the GCC project, due to which the GCC
>           project may not be the correct point of contact for them.  You are
>           encouraged to look at README files within those library
>     directories
>           to locate the canonical security contact point for those projects.
> 
> 
> As Richard mentioned, should GCC make a specific statement about the 
> security policy / response for issues that are discovered and fixed in 
> the upstream projects from which the GCC libraries are imported?

Ack, how about:

     You are encouraged to reach out to us or look at the README files
     within those library directories to locate the canonical security
     contact point for those projects and include them in the security
     report.  Once the security issue is addressed upstream, the GCC
     project may sync code from upstream to resolve the issue in GCC.

> 
>     Diagnostic libraries
>     --------------------
> 
>           The sanitizer library bundled in GCC is intended to be used in
>           diagnostic cases and not intended for use in sensitive
>     environments.
>           As a result, bugs in the sanitizer will not be considered security
>           sensitive.
> 
>     GCC plugins
>     -----------
> 
>           It should be noted that GCC may execute arbitrary code loaded by a
>           user through the GCC plugin mechanism or through system preloading
>           mechanism.  Such custom code should be vetted by the user for
>     safety
>           as bugs exposed through such code will not be considered security
>           issues.
> 
> 
> Thanks, David


* Re: [RFC] GCC Security policy
  2023-08-09 17:32                   ` Siddhesh Poyarekar
  2023-08-09 18:17                     ` David Edelsohn
@ 2023-08-10 18:28                     ` Richard Sandiford
  2023-08-10 18:50                       ` Siddhesh Poyarekar
  2023-08-10 19:27                       ` Richard Biener
  2023-08-11 15:12                     ` David Edelsohn
  2 siblings, 2 replies; 72+ messages in thread
From: Richard Sandiford @ 2023-08-10 18:28 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, Richard Biener, Ian Lance Taylor, Jakub Jelinek,
	GCC Patches, Carlos O'Donell

Siddhesh Poyarekar <siddhesh@gotplt.org> writes:
> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
>>> Do you have a suggestion for the language to address libgcc, 
>>> libstdc++, etc. and libiberty, libbacktrace, etc.?
>> 
>> I'll work on this a bit and share a draft.
>
> Hi David,
>
> Here's what I came up with for different parts of GCC, including the 
> runtime libraries.  Over time we may find that specific parts of runtime 
> libraries simply cannot be used safely in some contexts and flag that.
>
> Sid
>
> """
> What is a GCC security bug?
> ===========================
>
>      A security bug is one that threatens the security of a system or
>      network, or might compromise the security of data stored on it.
>      In the context of GCC there are multiple ways in which this might
>      happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> -----------------------------------------------------------
>
>      The compiler driver processes source code, invokes other programs
>      such as the assembler and linker and generates the output result,
>      which may be assembly code or machine code.  It is necessary that
>      all source code inputs to the compiler are trusted, since it is
>      impossible for the driver to validate input source code beyond
>      conformance to a programming language standard.
>
>      The GCC JIT implementation, libgccjit, is intended to be plugged
>      into applications to translate input source code in the application
>      context.  Limitations that apply to the compiler
>      driver, apply here too in terms of sanitizing inputs, so it is
>      recommended that inputs are either sanitized by an external program
>      to allow only trusted, safe execution in the context of the
>      application or the JIT execution context is appropriately sandboxed
>      to contain the effects of any bugs in the JIT or its generated code
>      to the sandboxed environment.
>
>      Support libraries such as libiberty, libcc1 libvtv and libcpp have
>      been developed separately to share code with other tools such as
>      binutils and gdb.  These libraries again have similar challenges to
>      compiler drivers.  While they are expected to be robust against
>      arbitrary input, they should only be used with trusted inputs.
>
>      Libraries such as zlib and libffi that bundled into GCC to build it
>      will be treated the same as the compiler drivers and programs as far
>      as security coverage is concerned.
>
>      As a result, the only case for a potential security issue in all
>      these cases is when it ends up generating vulnerable output for
>      valid input source code.

I think this leaves open the interpretation "every wrong code bug
is potentially a security bug".  I suppose that's true in a trite sense,
but not in a useful sense.  As others said earlier in the thread,
whether a wrong code bug in GCC leads to a security bug in the object
code is too application-dependent to be a useful classification for GCC.

I think we should explicitly say that we don't generally consider wrong
code bugs to be security bugs.  Leaving it implicit is bound to lead
to misunderstanding.

There's another case that I think should be highlighted explicitly:
GCC provides various security-hardening features.  I think any failure
of those features to act as documented is potentially a security bug.
Failure to follow reasonable expectations (even if not documented)
might sometimes be a security bug too.

Thanks,
Richard
>
> Language runtime libraries
> --------------------------
>
>      GCC also builds and distributes libraries that are intended to be
>      used widely to implement runtime support for various programming
>      languages.  These include the following:
>
>      * libada
>      * libatomic
>      * libbacktrace
>      * libcc1
>      * libcody
>      * libcpp
>      * libdecnumber
>      * libgcc
>      * libgfortran
>      * libgm2
>      * libgo
>      * libgomp
>      * libiberty
>      * libitm
>      * libobjc
>      * libphobos
>      * libquadmath
>      * libssp
>      * libstdc++
>
>      These libraries are intended to be used in arbitrary contexts and as
>      a result, bugs in these libraries may be evaluated for security
>      impact.  However, some of these libraries, e.g. libgo, libphobos,
>      etc.  are not maintained in the GCC project, due to which the GCC
>      project may not be the correct point of contact for them.  You are
>      encouraged to look at README files within those library directories
>      to locate the canonical security contact point for those projects.
>
> Diagnostic libraries
> --------------------
>
>      The sanitizer library bundled in GCC is intended to be used in
>      diagnostic cases and not intended for use in sensitive environments.
>      As a result, bugs in the sanitizer will not be considered security
>      sensitive.
>
> GCC plugins
> -----------
>
>      It should be noted that GCC may execute arbitrary code loaded by a
>      user through the GCC plugin mechanism or through system preloading
>      mechanism.  Such custom code should be vetted by the user for safety
>      as bugs exposed through such code will not be considered security
>      issues.


* Re: [RFC] GCC Security policy
  2023-08-10 18:28                     ` Richard Sandiford
@ 2023-08-10 18:50                       ` Siddhesh Poyarekar
  2023-08-11 14:36                         ` Siddhesh Poyarekar
  2023-08-10 19:27                       ` Richard Biener
  1 sibling, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-10 18:50 UTC (permalink / raw)
  To: David Edelsohn, Richard Biener, Ian Lance Taylor, Jakub Jelinek,
	GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-10 14:28, Richard Sandiford wrote:
> Siddhesh Poyarekar <siddhesh@gotplt.org> writes:
>> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
>>>> Do you have a suggestion for the language to address libgcc,
>>>> libstdc++, etc. and libiberty, libbacktrace, etc.?
>>>
>>> I'll work on this a bit and share a draft.
>>
>> Hi David,
>>
>> Here's what I came up with for different parts of GCC, including the
>> runtime libraries.  Over time we may find that specific parts of runtime
>> libraries simply cannot be used safely in some contexts and flag that.
>>
>> Sid
>>
>> """
>> What is a GCC security bug?
>> ===========================
>>
>>       A security bug is one that threatens the security of a system or
>>       network, or might compromise the security of data stored on it.
>>       In the context of GCC there are multiple ways in which this might
>>       happen and they're detailed below.
>>
>> Compiler drivers, programs, libgccjit and support libraries
>> -----------------------------------------------------------
>>
>>       The compiler driver processes source code, invokes other programs
>>       such as the assembler and linker and generates the output result,
>>       which may be assembly code or machine code.  It is necessary that
>>       all source code inputs to the compiler are trusted, since it is
>>       impossible for the driver to validate input source code beyond
>>       conformance to a programming language standard.
>>
>>       The GCC JIT implementation, libgccjit, is intended to be plugged
>>       into applications to translate input source code in the application
>>       context.  Limitations that apply to the compiler
>>       driver, apply here too in terms of sanitizing inputs, so it is
>>       recommended that inputs are either sanitized by an external program
>>       to allow only trusted, safe execution in the context of the
>>       application or the JIT execution context is appropriately sandboxed
>>       to contain the effects of any bugs in the JIT or its generated code
>>       to the sandboxed environment.
>>
>>       Support libraries such as libiberty, libcc1 libvtv and libcpp have
>>       been developed separately to share code with other tools such as
>>       binutils and gdb.  These libraries again have similar challenges to
>>       compiler drivers.  While they are expected to be robust against
>>       arbitrary input, they should only be used with trusted inputs.
>>
>>       Libraries such as zlib and libffi that bundled into GCC to build it
>>       will be treated the same as the compiler drivers and programs as far
>>       as security coverage is concerned.
>>
>>       As a result, the only case for a potential security issue in all
>>       these cases is when it ends up generating vulnerable output for
>>       valid input source code.
> 
> I think this leaves open the interpretation "every wrong code bug
> is potentially a security bug".  I suppose that's true in a trite sense,
> but not in a useful sense.  As others said earlier in the thread,
> whether a wrong code bug in GCC leads to a security bug in the object
> code is too application-dependent to be a useful classification for GCC.
> 
> I think we should explicitly say that we don't generally consider wrong
> code bugs to be security bugs.  Leaving it implicit is bound to lead
> to misunderstanding.

I see what you mean, but the context-dependence of a bug is something 
GCC will have to deal with, similar to how libraries have to deal with 
bugs.  But I agree this probably needs some more expansion.  Let me try 
and come up with something more detailed for that last paragraph.

> There's another case that I think should be highlighted explicitly:
> GCC provides various security-hardening features.  I think any failure
> of those feature to act as documented is poentially a security bug.
> Failure to follow reasonable expectations (even if not documented)
> might sometimes be a security bug too.

Missed hardening in general does not put systems at immediate risk, so 
they're not considered CVE-worthy.  In fact when bugs are evaluated for 
security risk at a source level (e.g. when NIST does it), hardening does 
not come into the picture at all.  It's only at product levels that 
hardening features are accounted for, e.g. where -fstack-protector would 
reduce the seriousness of a stack buffer overflow and even there one 
must do an analysis to see if the generated code actually mitigated the 
overflow using the stack protector canary.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-10 18:28                     ` Richard Sandiford
  2023-08-10 18:50                       ` Siddhesh Poyarekar
@ 2023-08-10 19:27                       ` Richard Biener
  1 sibling, 0 replies; 72+ messages in thread
From: Richard Biener @ 2023-08-10 19:27 UTC (permalink / raw)
  To: Richard Sandiford
  Cc: Siddhesh Poyarekar, David Edelsohn, Ian Lance Taylor,
	Jakub Jelinek, GCC Patches, Carlos O'Donell



> Am 10.08.2023 um 20:28 schrieb Richard Sandiford <richard.sandiford@arm.com>:
> 
> Siddhesh Poyarekar <siddhesh@gotplt.org> writes:
>> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
>>>> Do you have a suggestion for the language to address libgcc, 
>>>> libstdc++, etc. and libiberty, libbacktrace, etc.?
>>> 
>>> I'll work on this a bit and share a draft.
>> 
>> Hi David,
>> 
>> Here's what I came up with for different parts of GCC, including the 
>> runtime libraries.  Over time we may find that specific parts of runtime 
>> libraries simply cannot be used safely in some contexts and flag that.
>> 
>> Sid
>> 
>> """
>> What is a GCC security bug?
>> ===========================
>> 
>>     A security bug is one that threatens the security of a system or
>>     network, or might compromise the security of data stored on it.
>>     In the context of GCC there are multiple ways in which this might
>>     happen and they're detailed below.
>> 
>> Compiler drivers, programs, libgccjit and support libraries
>> -----------------------------------------------------------
>> 
>>     The compiler driver processes source code, invokes other programs
>>     such as the assembler and linker and generates the output result,
>>     which may be assembly code or machine code.  It is necessary that
>>     all source code inputs to the compiler are trusted, since it is
>>     impossible for the driver to validate input source code beyond
>>     conformance to a programming language standard.
>> 
>>     The GCC JIT implementation, libgccjit, is intended to be plugged
>>     into applications to translate input source code in the application
>>     context.  Limitations that apply to the compiler
>>     driver, apply here too in terms of sanitizing inputs, so it is
>>     recommended that inputs are either sanitized by an external program
>>     to allow only trusted, safe execution in the context of the
>>     application or the JIT execution context is appropriately sandboxed
>>     to contain the effects of any bugs in the JIT or its generated code
>>     to the sandboxed environment.
>> 
>>     Support libraries such as libiberty, libcc1 libvtv and libcpp have
>>     been developed separately to share code with other tools such as
>>     binutils and gdb.  These libraries again have similar challenges to
>>     compiler drivers.  While they are expected to be robust against
>>     arbitrary input, they should only be used with trusted inputs.
>> 
>>     Libraries such as zlib and libffi that bundled into GCC to build it
>>     will be treated the same as the compiler drivers and programs as far
>>     as security coverage is concerned.
>> 
>>     As a result, the only case for a potential security issue in all
>>     these cases is when it ends up generating vulnerable output for
>>     valid input source code.
> 
> I think this leaves open the interpretation "every wrong code bug
> is potentially a security bug".  I suppose that's true in a trite sense,
> but not in a useful sense.  As others said earlier in the thread,
> whether a wrong code bug in GCC leads to a security bug in the object
> code is too application-dependent to be a useful classification for GCC.
> 
> I think we should explicitly say that we don't generally consider wrong
> code bugs to be security bugs.  Leaving it implicit is bound to lead
> to misunderstanding.

In some sense the security bug is never in GCC itself but in the
consumer, which is what you need to be able to exploit.

Richard 


> There's another case that I think should be highlighted explicitly:
> GCC provides various security-hardening features.  I think any failure
> of those features to act as documented is potentially a security bug.
> Failure to follow reasonable expectations (even if not documented)
> might sometimes be a security bug too.
> 
> Thanks,
> Richard
>> 
>> Language runtime libraries
>> --------------------------
>> 
>>     GCC also builds and distributes libraries that are intended to be
>>     used widely to implement runtime support for various programming
>>     languages.  These include the following:
>> 
>>     * libada
>>     * libatomic
>>     * libbacktrace
>>     * libcc1
>>     * libcody
>>     * libcpp
>>     * libdecnumber
>>     * libgcc
>>     * libgfortran
>>     * libgm2
>>     * libgo
>>     * libgomp
>>     * libiberty
>>     * libitm
>>     * libobjc
>>     * libphobos
>>     * libquadmath
>>     * libssp
>>     * libstdc++
>> 
>>     These libraries are intended to be used in arbitrary contexts and as
>>     a result, bugs in these libraries may be evaluated for security
>>     impact.  However, some of these libraries, e.g. libgo, libphobos,
>>     etc., are not maintained in the GCC project, so the GCC project
>>     may not be the correct point of contact for them.  You are
>>     encouraged to look at README files within those library directories
>>     to locate the canonical security contact point for those projects.
>> 
>> Diagnostic libraries
>> --------------------
>> 
>>     The sanitizer library bundled in GCC is intended to be used in
>>     diagnostic cases and not intended for use in sensitive environments.
>>     As a result, bugs in the sanitizer will not be considered security
>>     sensitive.
>> 
>> GCC plugins
>> -----------
>> 
>>     It should be noted that GCC may execute arbitrary code loaded by a
>>     user through the GCC plugin mechanism or through the system
>>     preloading mechanism.  Such custom code should be vetted by the
>>     user for safety
>>     as bugs exposed through such code will not be considered security
>>     issues.

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-10 18:50                       ` Siddhesh Poyarekar
@ 2023-08-11 14:36                         ` Siddhesh Poyarekar
  2023-08-11 15:09                           ` Paul Koning
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-11 14:36 UTC (permalink / raw)
  To: David Edelsohn, Richard Biener, Ian Lance Taylor, Jakub Jelinek,
	GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-10 14:50, Siddhesh Poyarekar wrote:
>>>       As a result, the only case for a potential security issue in all
>>>       these cases is when it ends up generating vulnerable output for
>>>       valid input source code.
>>
>> I think this leaves open the interpretation "every wrong code bug
>> is potentially a security bug".  I suppose that's true in a trite sense,
>> but not in a useful sense.  As others said earlier in the thread,
>> whether a wrong code bug in GCC leads to a security bug in the object
>> code is too application-dependent to be a useful classification for GCC.
>>
>> I think we should explicitly say that we don't generally consider wrong
>> code bugs to be security bugs.  Leaving it implicit is bound to lead
>> to misunderstanding.
> 
> I see what you mean, but the context-dependence of a bug is something 
> GCC will have to deal with, similar to how libraries have to deal with 
> bugs.  But I agree this probably needs some more expansion.  Let me try 
> and come up with something more detailed for that last paragraph.

How's this:

As a result, the only case for a potential security issue in the 
compiler is when it generates vulnerable application code for valid, 
trusted input source code.  The output application code could be 
considered vulnerable if it produces an actual vulnerability in the 
target application, specifically in the following cases:

- The application dereferences an invalid memory location despite the 
application sources being valid.

- The application reads from or writes to a valid but incorrect memory 
location, resulting in an information integrity issue or an information 
leak.

- The application ends up running in an infinite loop or with severe 
degradation in performance despite the input sources having no such 
issue, resulting in a Denial of Service.  Note that correct but 
non-performant code is not a security issue candidate; this only applies 
to incorrect code that may result in performance degradation.

- The application crashes due to the generated incorrect code, resulting 
in a Denial of Service.


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-11 14:36                         ` Siddhesh Poyarekar
@ 2023-08-11 15:09                           ` Paul Koning
  2023-08-11 15:20                             ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Paul Koning @ 2023-08-11 15:09 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, Richard Biener, Ian Lance Taylor, Jakub Jelinek,
	GCC Patches, Carlos O'Donell, richard.sandiford



> On Aug 11, 2023, at 10:36 AM, Siddhesh Poyarekar <siddhesh@gotplt.org> wrote:
> 
> On 2023-08-10 14:50, Siddhesh Poyarekar wrote:
>>>>       As a result, the only case for a potential security issue in all
>>>>       these cases is when it ends up generating vulnerable output for
>>>>       valid input source code.
>>> 
>>> I think this leaves open the interpretation "every wrong code bug
>>> is potentially a security bug".  I suppose that's true in a trite sense,
>>> but not in a useful sense.  As others said earlier in the thread,
>>> whether a wrong code bug in GCC leads to a security bug in the object
>>> code is too application-dependent to be a useful classification for GCC.
>>> 
>>> I think we should explicitly say that we don't generally consider wrong
>>> code bugs to be security bugs.  Leaving it implicit is bound to lead
>>> to misunderstanding.
>> I see what you mean, but the context-dependence of a bug is something GCC will have to deal with, similar to how libraries have to deal with bugs.  But I agree this probably needs some more expansion.  Let me try and come up with something more detailed for that last paragraph.
> 
> How's this:
> 
> As a result, the only case for a potential security issue in the compiler is when it generates vulnerable application code for valid, trusted input source code.  The output application code could be considered vulnerable if it produces an actual vulnerability in the target application, specifically in the following cases:

You might make it explicit that we're talking about wrong code errors here -- in other words, the source code is correct (conforms to the standard) and the algorithm expressed in the source code does not have a vulnerability, but the generated code has semantics that differ from those of the source code such that it does have a vulnerability.

> - The application dereferences an invalid memory location despite the application sources being valid.
> 
> - The application reads from or writes to a valid but incorrect memory location, resulting in an information integrity issue or an information leak.
> 
> - The application ends up running in an infinite loop or with severe degradation in performance despite the input sources having no such issue, resulting in a Denial of Service.  Note that correct but non-performant code is not a security issue candidate; this only applies to incorrect code that may result in performance degradation.

The last sentence somewhat contradicts the preceding one.  Perhaps "...may result in performance degradation severe enough to amount to a denial of service".

> - The application crashes due to the generated incorrect code, resulting in a Denial of Service.

	paul


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-09 17:32                   ` Siddhesh Poyarekar
  2023-08-09 18:17                     ` David Edelsohn
  2023-08-10 18:28                     ` Richard Sandiford
@ 2023-08-11 15:12                     ` David Edelsohn
  2023-08-11 15:22                       ` Siddhesh Poyarekar
  2 siblings, 1 reply; 72+ messages in thread
From: David Edelsohn @ 2023-08-11 15:12 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell

[-- Attachment #1: Type: text/plain, Size: 5393 bytes --]

On Wed, Aug 9, 2023 at 1:33 PM Siddhesh Poyarekar <siddhesh@gotplt.org>
wrote:

> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
> >> Do you have a suggestion for the language to address libgcc,
> >> libstdc++, etc. and libiberty, libbacktrace, etc.?
> >
> > I'll work on this a bit and share a draft.
>
> Hi David,
>
> Here's what I came up with for different parts of GCC, including the
> runtime libraries.  Over time we may find that specific parts of runtime
> libraries simply cannot be used safely in some contexts and flag that.
>
> Sid
>
> """
> What is a GCC security bug?
> ===========================
>
>      A security bug is one that threatens the security of a system or
>      network, or might compromise the security of data stored on it.
>      In the context of GCC there are multiple ways in which this might
>      happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> -----------------------------------------------------------
>
>      The compiler driver processes source code, invokes other programs
>      such as the assembler and linker and generates the output result,
>      which may be assembly code or machine code.  It is necessary that
>      all source code inputs to the compiler are trusted, since it is
>      impossible for the driver to validate input source code beyond
>      conformance to a programming language standard.
>
>      The GCC JIT implementation, libgccjit, is intended to be plugged
>      into applications to translate input source code in the
>      application context.  The limitations that apply to the compiler
>      driver apply here too in terms of sanitizing inputs.  It is
>      therefore recommended that inputs either be sanitized by an
>      external program to allow only trusted, safe execution in the
>      context of the application, or that the JIT execution context be
>      appropriately sandboxed to contain the effects of any bugs in the
>      JIT or its generated code to the sandboxed environment.
>
>      Support libraries such as libiberty, libcc1, libvtv and libcpp have
>      been developed separately to share code with other tools such as
>      binutils and gdb.  These libraries again have similar challenges to
>      compiler drivers.  While they are expected to be robust against
>      arbitrary input, they should only be used with trusted inputs.
>
>      Libraries such as zlib and libffi that are bundled into GCC to
>      build it will be treated the same as the compiler drivers and
>      programs as far as security coverage is concerned.
>
>      As a result, the only case for a potential security issue in all
>      these cases is when it ends up generating vulnerable output for
>      valid input source code.
>
> Language runtime libraries
> --------------------------
>
>      GCC also builds and distributes libraries that are intended to be
>      used widely to implement runtime support for various programming
>      languages.  These include the following:
>
>      * libada
>      * libatomic
>      * libbacktrace
>      * libcc1
>      * libcody
>      * libcpp
>      * libdecnumber
>      * libgcc
>      * libgfortran
>      * libgm2
>      * libgo
>      * libgomp
>      * libiberty
>      * libitm
>      * libobjc
>      * libphobos
>      * libquadmath
>      * libssp
>      * libstdc++
>
>      These libraries are intended to be used in arbitrary contexts and as
>      a result, bugs in these libraries may be evaluated for security
>      impact.  However, some of these libraries, e.g. libgo, libphobos,
>      etc., are not maintained in the GCC project, so the GCC project
>      may not be the correct point of contact for them.  You are
>      encouraged to look at README files within those library directories
>      to locate the canonical security contact point for those projects.
>

Hi, Sid

The text above states "bugs in these libraries may be evaluated for
security impact", but there is no comment about the criteria for a security
impact, unlike the GLIBC SECURITY.md document.  The text seems to imply the
"What is a security bug?" definitions from GLIBC, but the definitions are
not explicitly stated in the GCC Security policy.

Should this "Language runtime libraries" section include some of the GLIBC
"What is a security bug?" text or should the GCC "What is a security bug?"
section earlier in this document include the text with a qualification that
issues like buffer overflow, memory leaks, information disclosure, etc.
specifically apply to "Language runtime libraries" and not all components
of GCC?

Thanks, David


>
> Diagnostic libraries
> --------------------
>
>      The sanitizer library bundled in GCC is intended to be used in
>      diagnostic cases and not intended for use in sensitive environments.
>      As a result, bugs in the sanitizer will not be considered security
>      sensitive.
>
> GCC plugins
> -----------
>
>      It should be noted that GCC may execute arbitrary code loaded by a
>      user through the GCC plugin mechanism or through the system
>      preloading mechanism.  Such custom code should be vetted by the
>      user for safety
>      as bugs exposed through such code will not be considered security
>      issues.
>

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-11 15:09                           ` Paul Koning
@ 2023-08-11 15:20                             ` Siddhesh Poyarekar
  0 siblings, 0 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-11 15:20 UTC (permalink / raw)
  To: Paul Koning
  Cc: David Edelsohn, Richard Biener, Ian Lance Taylor, Jakub Jelinek,
	GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-11 11:09, Paul Koning wrote:
> 
> 
>> On Aug 11, 2023, at 10:36 AM, Siddhesh Poyarekar <siddhesh@gotplt.org> wrote:
>>
>> On 2023-08-10 14:50, Siddhesh Poyarekar wrote:
>>>>>        As a result, the only case for a potential security issue in all
>>>>>        these cases is when it ends up generating vulnerable output for
>>>>>        valid input source code.
>>>>
>>>> I think this leaves open the interpretation "every wrong code bug
>>>> is potentially a security bug".  I suppose that's true in a trite sense,
>>>> but not in a useful sense.  As others said earlier in the thread,
>>>> whether a wrong code bug in GCC leads to a security bug in the object
>>>> code is too application-dependent to be a useful classification for GCC.
>>>>
>>>> I think we should explicitly say that we don't generally consider wrong
>>>> code bugs to be security bugs.  Leaving it implicit is bound to lead
>>>> to misunderstanding.
>>> I see what you mean, but the context-dependence of a bug is something GCC will have to deal with, similar to how libraries have to deal with bugs.  But I agree this probably needs some more expansion.  Let me try and come up with something more detailed for that last paragraph.
>>
>> How's this:
>>
>> As a result, the only case for a potential security issue in the compiler is when it generates vulnerable application code for valid, trusted input source code.  The output application code could be considered vulnerable if it produces an actual vulnerability in the target application, specifically in the following cases:
> 
> You might make it explicit that we're talking about wrong code errors here -- in other words, the source code is correct (conforms to the standard) and the algorithm expressed in the source code does not have a vulnerability, but the generated code has semantics that differ from those of the source code such that it does have a vulnerability.

Ack, thanks for the suggestion.

> 
>> - The application dereferences an invalid memory location despite the application sources being valid.
>>
>> - The application reads from or writes to a valid but incorrect memory location, resulting in an information integrity issue or an information leak.
>>
>> - The application ends up running in an infinite loop or with severe degradation in performance despite the input sources having no such issue, resulting in a Denial of Service.  Note that correct but non-performant code is not a security issue candidate; this only applies to incorrect code that may result in performance degradation.
> 
> The last sentence somewhat contradicts the preceding one.  Perhaps "...may result in performance degradation severe enough to amount to a denial of service".

Ack, will fix that up, thanks.

Sid

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-11 15:12                     ` David Edelsohn
@ 2023-08-11 15:22                       ` Siddhesh Poyarekar
  0 siblings, 0 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-11 15:22 UTC (permalink / raw)
  To: David Edelsohn
  Cc: Richard Biener, Ian Lance Taylor, Jakub Jelinek, GCC Patches,
	Carlos O'Donell

On 2023-08-11 11:12, David Edelsohn wrote:
> The text above states "bugs in these libraries may be evaluated for 
> security impact", but there is no comment about the criteria for a 
> security impact, unlike the GLIBC SECURITY.md document.  The text seems 
> to imply the "What is a security bug?" definitions from GLIBC, but the 
> definitions are not explicitly stated in the GCC Security policy.
> 
> Should this "Language runtime libraries" section include some of the 
> GLIBC "What is a security bug?" text or should the GCC "What is a 
> security bug?" section earlier in this document include the text with a 
> qualification that issues like buffer overflow, memory leaks, 
> information disclosure, etc. specifically apply to "Language runtime 
> libraries" and not all components of GCC?

Yes, that makes sense.  This part will likely evolve though, much like 
the glibc one did, based on reports we get over time.  I'll work it in 
and post an updated draft.

Thanks,
Sid

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-07 17:29 [RFC] GCC Security policy David Edelsohn
  2023-08-08  8:16 ` Richard Biener
@ 2023-08-14 13:26 ` Siddhesh Poyarekar
  2023-08-14 18:51   ` Richard Sandiford
  2023-08-15 23:45   ` David Malcolm
  2023-09-06 11:23 ` Siddhesh Poyarekar
  2023-09-20  7:36 ` Arnaud Charlet
  3 siblings, 2 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-14 13:26 UTC (permalink / raw)
  To: David Edelsohn, GCC Patches; +Cc: Carlos O'Donell

Hi,

Here's the updated draft of the top part of the security policy with all 
of the recommendations incorporated.

Thanks,
Sid


What is a GCC security bug?
===========================

     A security bug is one that threatens the security of a system or
     network, or might compromise the security of data stored on it.
     In the context of GCC there are multiple ways in which this might
     happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
-----------------------------------------------------------

     The compiler driver processes source code, invokes other programs
     such as the assembler and linker and generates the output result,
     which may be assembly code or machine code.  It is necessary that
     all source code inputs to the compiler are trusted, since it is
     impossible for the driver to validate input source code beyond
     conformance to a programming language standard.

     The GCC JIT implementation, libgccjit, is intended to be plugged
     into applications to translate input source code in the
     application context.  The limitations that apply to the compiler
     driver apply here too in terms of sanitizing inputs.  It is
     therefore recommended that inputs either be sanitized by an
     external program to allow only trusted, safe execution in the
     context of the application, or that the JIT execution context be
     appropriately sandboxed to contain the effects of any bugs in the
     JIT or its generated code to the sandboxed environment.

     Support libraries such as libiberty, libcc1, libvtv and libcpp have
     been developed separately to share code with other tools such as
     binutils and gdb.  These libraries again have similar challenges to
     compiler drivers.  While they are expected to be robust against
     arbitrary input, they should only be used with trusted inputs.

     Libraries such as zlib that are bundled into GCC to build it will
     be treated the same as the compiler drivers and programs as far as
     security coverage is concerned.  However, if you find an issue in
     these libraries independent of their use in GCC, you should reach
     out to their upstream projects to report them.

     As a result, the only case for a potential security issue in the
     compiler is when it generates vulnerable application code for
     trusted input source code that conforms to the relevant
     programming standard, or to extensions documented as supported by
     GCC, where the algorithm expressed in the source code does not
     itself have the vulnerability.  The output application code could
     be considered vulnerable if it produces an actual vulnerability in
     the target application, specifically in the following cases:

     - The application dereferences an invalid memory location despite
       the application sources being valid.
     - The application reads from or writes to a valid but incorrect
       memory location, resulting in an information integrity issue or an
       information leak.
     - The application ends up running in an infinite loop or with
       severe degradation in performance despite the input sources having
       no such issue, resulting in a Denial of Service.  Note that
       correct but non-performant code is not a security issue candidate;
       this only applies to incorrect code that may result in performance
       degradation severe enough to amount to a denial of service.
     - The application crashes due to the generated incorrect code,
       resulting in a Denial of Service.

Language runtime libraries
--------------------------

     GCC also builds and distributes libraries that are intended to be
     used widely to implement runtime support for various programming
     languages.  These include the following:

     * libada
     * libatomic
     * libbacktrace
     * libcc1
     * libcody
     * libcpp
     * libdecnumber
     * libffi
     * libgcc
     * libgfortran
     * libgm2
     * libgo
     * libgomp
     * libiberty
     * libitm
     * libobjc
     * libphobos
     * libquadmath
     * libsanitizer
     * libssp
     * libstdc++

     These libraries are intended to be used in arbitrary contexts and as
     a result, bugs in these libraries may be evaluated for security
     impact.  However, some of these libraries, e.g. libgo, libphobos,
     etc., are not maintained in the GCC project, so the GCC project
     may not be the correct point of contact for them.  You are
     encouraged to look at README files within those library directories
     to locate the canonical security contact point for those projects
     and include them in the report.  Once the issue is fixed in the
     upstream project, the fix will be synced into GCC in a future
     release.

     Most security vulnerabilities in these runtime libraries arise when
     an application uses functionality in a specific way.  As a result,
     not all bugs qualify as security relevant.  The following guidelines
     can help with the decision:

     - Buffer overflows and integer overflows should be treated as
       security issues if it is conceivable that the data triggering them
       can come from an untrusted source.
     - Bugs that cause memory corruption which is likely exploitable
       should be treated as security bugs.
     - Information disclosure can be a security bug, especially if
       exposure through applications can be determined.
     - Memory leaks and races are security bugs if they cause service
       breakage.
     - Stack overflow through unbounded alloca calls or variable-length
       arrays is a security bug if it is conceivable that the data
       triggering the overflow could come from an untrusted source.
     - Stack overflow through deep recursion and other crashes are
       security bugs if they cause service breakage.
     - Bugs that cripple the whole system (so that it doesn't even boot
       or does not run most applications) are not security bugs because
       they will not be exploitable in practice, due to general system
       instability.

Diagnostic libraries
--------------------

     The sanitizer library bundled in GCC is intended to be used in
     diagnostic cases and not intended for use in sensitive environments.
     As a result, bugs in the sanitizer will not be considered security
     sensitive.

GCC plugins
-----------

     It should be noted that GCC may execute arbitrary code loaded by a
     user through the GCC plugin mechanism or through the system
     preloading mechanism.  Such custom code should be vetted by the
     user for safety
     as bugs exposed through such code will not be considered security
     issues.

Security hardening implemented in GCC
-------------------------------------

     GCC implements a number of security features that reduce the impact
     of security issues in applications, such as -fstack-protector,
     -fstack-clash-protection, _FORTIFY_SOURCE and so on.  A failure of
     these features to function perfectly in all situations is not a
     security issue in itself, since they depend on heuristics and
     may not always have full coverage for protection.

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-14 13:26 ` Siddhesh Poyarekar
@ 2023-08-14 18:51   ` Richard Sandiford
  2023-08-14 19:31     ` Siddhesh Poyarekar
  2023-08-15 23:45   ` David Malcolm
  1 sibling, 1 reply; 72+ messages in thread
From: Richard Sandiford @ 2023-08-14 18:51 UTC (permalink / raw)
  To: Siddhesh Poyarekar; +Cc: David Edelsohn, GCC Patches, Carlos O'Donell

I think it would help to clarify what the aim of the security policy is.
Specifically:

(1) What service do we want to provide to users by classifying one thing
    as a security bug and another thing as not a security bug?

(2) What service do we want to provide to the GNU community by the same
    classification?

I think it will be easier to agree on the classification if we first
agree on that.

Siddhesh Poyarekar <siddhesh@gotplt.org> writes:
> Hi,
>
> Here's the updated draft of the top part of the security policy with all 
> of the recommendations incorporated.
>
> Thanks,
> Sid
>
>
> What is a GCC security bug?
> ===========================
>
>      A security bug is one that threatens the security of a system or
>      network, or might compromise the security of data stored on it.
>      In the context of GCC there are multiple ways in which this might
>      happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> -----------------------------------------------------------
>
>      The compiler driver processes source code, invokes other programs
>      such as the assembler and linker and generates the output result,
>      which may be assembly code or machine code.  It is necessary that
>      all source code inputs to the compiler are trusted, since it is
>      impossible for the driver to validate input source code beyond
>      conformance to a programming language standard.
>
>      The GCC JIT implementation, libgccjit, is intended to be plugged
>      into applications to translate input source code in the
>      application context.  The limitations that apply to the compiler
>      driver apply here too in terms of sanitizing inputs.  It is
>      therefore recommended that inputs either be sanitized by an
>      external program to allow only trusted, safe execution in the
>      context of the application, or that the JIT execution context be
>      appropriately sandboxed to contain the effects of any bugs in the
>      JIT or its generated code to the sandboxed environment.
>
>      Support libraries such as libiberty, libcc1, libvtv and libcpp have
>      been developed separately to share code with other tools such as
>      binutils and gdb.  These libraries again have similar challenges to
>      compiler drivers.  While they are expected to be robust against
>      arbitrary input, they should only be used with trusted inputs.
>
>      Libraries such as zlib that are bundled into GCC to build it will
>      be treated the same as the compiler drivers and programs as far as
>      security coverage is concerned.  However, if you find an issue in
>      these libraries independent of their use in GCC, you should reach
>      out to their upstream projects to report them.
>
>      As a result, the only case for a potential security issue in the
>      compiler is when it generates vulnerable application code for
>      trusted input source code that conforms to the relevant
>      programming standard, or to extensions documented as supported by
>      GCC, where the algorithm expressed in the source code does not
>      itself have the vulnerability.  The output application code could
>      be considered vulnerable if it produces an actual vulnerability in
>      the target application, specifically in the following cases:
>
>      - The application dereferences an invalid memory location despite
>        the application sources being valid.
>      - The application reads from or writes to a valid but incorrect
>        memory location, resulting in an information integrity issue or an
>        information leak.
>      - The application ends up running in an infinite loop or with
>        severe degradation in performance despite the input sources having
>        no such issue, resulting in a Denial of Service.  Note that
>        correct but non-performant code is not a security issue candidate,
>        this only applies to incorrect code that may result in performance
>        degradation severe enough to amount to a denial of service.
>      - The application crashes due to the generated incorrect code,
>        resulting in a Denial of Service.

One difficulty is that wrong-code bugs are rarely confined to
a particular source code structure.  Something that causes a
miscompilation of a bounds check could later be discovered to cause a
miscompilation of something that is less obviously security-sensitive.
Or the same thing could happen in reverse.  And it's common for the
same bug to be reported multiple times, against different testcases.

The proposal says that certain kinds of wrong code could be a security
bug.  But what will be the criteria for deciding whether a wrong code
bug that *could* be classified as a security bug is in fact a security
bug?  Does someone have to show that at least one security-sensitive
application is vulnerable?  Or would it be based on a reasonable worst
case (to borrow a concept from the CVSS scoring)?

If it's based on proof, then:

(1) Doesn't that put FOSS projects (and particular projects in Debian/
    Red Hat/SUSE distros) in a more elevated position relative to other
    users?  Someone would be prepared to tell the Debian security team
    about a security bug in Debian, but should they be required to tell
    the Debian security team about a security bug in proprietary code
    that's compiled with GCC?  (I'm just picking Debian as an example.)

(2) As mentioned above, proof of security sensitivity could be provided
    alongside the first report of a codegen bug, or later.  What will
    the practical difference be between these two cases?  How will the
    experience of the reporter differ?

If it's based on reasonable worst case, then most wrong code bugs
would be security bugs.

> [...]
> Security hardening implemented in GCC
> -------------------------------------
>
>      GCC implements a number of security features that reduce the impact
>      of security issues in applications, such as -fstack-protector,
>      -fstack-clash-protection, _FORTIFY_SOURCE and so on.  A failure in
>      these features functioning perfectly in all situations is not a
>      security issue in itself since they're dependent on heuristics and
>      may not always have full coverage for protection.

I don't follow the last sentence.  Many security hardening features are
precise (or at least relatively precise) about what they do.  What they do
might only offer incomplete protection.  But they can still be evaluated
on their own terms, against their documentation.  (And I would argue they
can also be evaluated against reasonable expectation.)

For example, -fzero-call-used-regs=used zeros all call-used registers
that are "set or referenced in the function".  It's easy to establish
whether the option is doing this by examining the assembly code.
Zeroing those registers doesn't prevent all data leakage through
registers, and so in that sense doesn't provide "full coverage for
protection".  But that isn't what the option promises.

In a reasonable worst case scenario, the failure of a security protection
feature to provide the promised protection could allow an exploit that
wouldn't have been possible otherwise.  IMO that makes it a security bug.

Thanks,
Richard


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-14 18:51   ` Richard Sandiford
@ 2023-08-14 19:31     ` Siddhesh Poyarekar
  2023-08-14 21:16       ` Alexander Monakov
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-14 19:31 UTC (permalink / raw)
  To: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-14 14:51, Richard Sandiford wrote:
> I think it would help to clarify what the aim of the security policy is.
> Specifically:
> 
> (1) What service do we want to provide to users by classifying one thing
>      as a security bug and another thing as not a security bug?
> 
> (2) What service do we want to provide to the GNU community by the same
>      classification?
> 
> I think it will be easier to agree on the classification if we first
> agree on that.

I actually wanted to do a talk on this at the Cauldron this year and 
*then* propose this for the gcc community, but I guess we could do this 
early :)

So the core intent of a security policy for a project is to make clear 
the security stance of the project, specifying to the extent possible 
what kind of uses are considered safe and what kinds of bugs would be 
considered security issues in the context of those uses.

There are a few advantages of doing this:

1. It makes clear to users of the project the scope in which the 
project can be used and what safety they can reasonably expect from 
it.  In the context of GCC for example, users cannot expect the 
compiler to do a safety check of untrusted sources; the compiler will 
consider #include "/etc/passwd" just as valid code as #include <stdio.h> 
and as a result, the onus is on the user environment to validate the 
input sources for safety.

2. It helps the security community (Mitre and other CNAs and security 
researchers) set correct expectations of the project so that they don't 
cry wolf for every segfault or ICE under the pretext that code could 
presumably be run as a service somehow and hence result in a "DoS".

3. This in turn helps stave off spurious CVE submissions that cause 
needless churn in downstream distributions.  LLVM is already starting to 
see this[1] and it's only a matter of time before people start doing 
this for GCC.

4. It helps make a distinction between important bugs and security bugs; 
they're often conflated as one and the same thing.  Security bugs are 
special because they require different handling from those that do not 
have a security impact, regardless of their actual importance. 
Unfortunately one of the reasons they're special is because there's a 
bunch of (pretty dumb) automation out there that rings alarm bells on 
every single CVE.  Without a clear understanding of the context under 
which a project can be used, these alarm bells can be made unreasonably 
loud (due to incorrect scoring, see the LLVM CVE for instance; just one 
element in that vector changes the score from 0.0 to 5.5), causing 
needless churn in not just the code base but in downstream releases and 
end user environments.

5. This exercise is also a great start in developing an understanding of 
which parts in GCC are security sensitive and in what sense.  Runtime 
libraries for example have a direct impact on application security. 
Compiler impact is a little less direct.  Hardening features have 
another effect, but it's more mitigation-oriented than direct safety. 
This also informs us about the impact of various project actions such as 
bundling third-party libraries and development and maintenance of 
tooling within GCC and will hopefully guide policies around those practices.

I hope this is a sufficient start.  We don't necessarily want to get 
into the business of acknowledging or rejecting security issues as 
upstream at the moment (but see also the CNA discussion[2] of what we 
intend to do in that space for glibc) but having uniform upstream 
guidelines would be helpful to researchers as well as downstream 
consumers to help decide what constitutes a security issue.

Thanks,
Sid

[1] https://nvd.nist.gov/vuln/detail/CVE-2023-29932
[2] 
https://inbox.sourceware.org/libc-alpha/1a44f25a-5aa3-28b7-1ecb-b3991d44ca97@gotplt.org/T/


* Re: [RFC] GCC Security policy
  2023-08-14 19:31     ` Siddhesh Poyarekar
@ 2023-08-14 21:16       ` Alexander Monakov
  2023-08-14 21:50         ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Monakov @ 2023-08-14 21:16 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford


On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:

> 1. It makes it clear to users of the project the scope in which the project
> could be used and what safety it could reasonably expect from the project.  In
> the context of GCC for example, it cannot expect the compiler to do a safety
> check of untrusted sources; the compiler will consider #include "/etc/passwd"
> just as valid code as #include <stdio.h> and as a result, the onus is on the
> user environment to validate the input sources for safety.

Whoa, no. We shouldn't make such statements unless we are prepared to explain
to users how such validation can be practically implemented, which I'm sure
we cannot in this case, due to future extensions such as the #embed directive,
and the ability to obfuscate filenames using the preprocessor.
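For instance (a sketch relying on standard computed-include behavior; the
macro names are made up), the file named by an #include need not appear
literally anywhere in the source, so a textual scan for suspicious paths is
not a reliable validation:

```c
/* The include target is assembled by macro expansion, so nothing in the
   source spells out an include line a scanner could pattern-match. */
#define STRINGIZE(x) #x
#define HEADER(name) STRINGIZE(name)
#include HEADER(stdio.h)      /* expands to #include "stdio.h" */

int say(void)
{
    /* printf is available only because of the computed include above */
    return printf("included via a computed path\n");
}
```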

I think it would be more honest to say that crafted sources can result in
arbitrary code execution with the privileges of the user invoking the compiler,
and hence the operator may want to ensure that no sensitive data is available
to that user (via measures ranging from plain UNIX permissions, to chroots,
to virtual machines, to air-gapped computers, depending on threat model).

Resource consumption is another good reason to sandbox compilers.

Alexander


* Re: [RFC] GCC Security policy
  2023-08-14 21:16       ` Alexander Monakov
@ 2023-08-14 21:50         ` Siddhesh Poyarekar
  2023-08-15  5:59           ` Alexander Monakov
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-14 21:50 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-14 17:16, Alexander Monakov wrote:
> 
> On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:
> 
>> 1. It makes it clear to users of the project the scope in which the project
>> could be used and what safety it could reasonably expect from the project.  In
>> the context of GCC for example, it cannot expect the compiler to do a safety
>> check of untrusted sources; the compiler will consider #include "/etc/passwd"
>> just as valid code as #include <stdio.h> and as a result, the onus is on the
>> user environment to validate the input sources for safety.
> 
> Whoa, no. We shouldn't make such statements unless we are prepared to explain
> to users how such validation can be practically implemented, which I'm sure
> we cannot in this case, due to future extensions such as the #embed directive,
> and ability to obfuscate filenames using the preprocessor.

There's no practical (programmatic) way to do such validation; it has to 
be a manual audit, which is why source code passed to the compiler has 
to be *trusted*.

> I think it would be more honest to say that crafted sources can result in
> arbitrary code execution with the privileges of the user invoking the compiler,
> and hence the operator may want to ensure that no sensitive data is available
> to that user (via measures ranging from plain UNIX permissions, to chroots,
> to virtual machines, to air-gapped computers, depending on threat model).

Right, that's what we're essentially trying to convey in the security 
policy text.  It doesn't go into mechanisms for securing execution 
(because that's really beyond the scope of the *project's* policy IMO) 
but it states unambiguously that input to the compiler must be trusted:

"""
                                           ... It is necessary that
     all source code inputs to the compiler are trusted, since it is
     impossible for the driver to validate input source code beyond
     conformance to a programming language standard...
"""

> Resource consumption is another good reason to sandbox compilers.

Agreed, we make that specific recommendation in the context of libgccjit.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-14 21:50         ` Siddhesh Poyarekar
@ 2023-08-15  5:59           ` Alexander Monakov
  2023-08-15 10:33             ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Monakov @ 2023-08-15  5:59 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford


On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:

> There's no practical (programmatic) way to do such validation; it has to be a
> manual audit, which is why source code passed to the compiler has to be
> *trusted*.

No, I do not think that is a logical conclusion. What is the problem with
passing untrusted code to a sandboxed compiler?

> Right, that's what we're essentially trying to convey in the security policy
> text.  It doesn't go into mechanisms for securing execution (because that's
> really beyond the scope of the *project's* policy IMO) but it states
> unambiguously that input to the compiler must be trusted:
> 
> """
>                                           ... It is necessary that
>     all source code inputs to the compiler are trusted, since it is
>     impossible for the driver to validate input source code beyond
>     conformance to a programming language standard...
> """

I see two issues with this. First, it reads as if people wishing to build
not-entirely-trusted sources need to seek some other compiler, as somehow
we seem to imply that sandboxing GCC is out of the question.

Second, I take issue with the last part of the quoted text (language
conformance): verifying standards conformance is also impossible
(consider UB that manifests only during linking or dynamic loading)
so GCC is only doing that on a best-effort basis with no guarantees.

Alexander


* Re: [RFC] GCC Security policy
  2023-08-15  5:59           ` Alexander Monakov
@ 2023-08-15 10:33             ` Siddhesh Poyarekar
  2023-08-15 14:07               ` Alexander Monakov
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-15 10:33 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-15 01:59, Alexander Monakov wrote:
> 
> On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:
> 
>> There's no practical (programmatic) way to do such validation; it has to be a
>> manual audit, which is why source code passed to the compiler has to be
>> *trusted*.
> 
> No, I do not think that is a logical conclusion. What is the problem with
> passing untrusted code to a sandboxed compiler?
> 
>> Right, that's what we're essentially trying to convey in the security policy
>> text.  It doesn't go into mechanisms for securing execution (because that's
>> really beyond the scope of the *project's* policy IMO) but it states
>> unambiguously that input to the compiler must be trusted:
>>
>> """
>>                                            ... It is necessary that
>>      all source code inputs to the compiler are trusted, since it is
>>      impossible for the driver to validate input source code beyond
>>      conformance to a programming language standard...
>> """
> 
> I see two issues with this. First, it reads as if people wishing to build
> not-entirely-trusted sources need to seek some other compiler, as somehow
> we seem to imply that sandboxing GCC is out of the question.
> 
> Second, I take issue with the last part of the quoted text (language
> conformance): verifying standards conformance is also impossible
> (consider UB that manifests only during linking or dynamic loading)
> so GCC is only doing that on a best-effort basis with no guarantees.

Does this as the first paragraph address your concerns:

The compiler driver processes source code, invokes other programs such 
as the assembler and linker and generates the output result, which may 
be assembly code or machine code.  It is necessary that all source code 
inputs to the compiler are trusted, since it is impossible for the 
driver to validate input source code for safety.  For untrusted code 
should compilation should be done inside a sandboxed environment to 
ensure that it does not compromise the development environment.  Note 
that this still does not guarantee safety of the produced output 
programs and that such programs should still either be analyzed 
thoroughly for safety or run only inside a sandbox or an isolated system 
to avoid compromising the execution environment.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-15 10:33             ` Siddhesh Poyarekar
@ 2023-08-15 14:07               ` Alexander Monakov
  2023-08-15 14:54                 ` Paul Koning
  2023-08-15 19:13                 ` Siddhesh Poyarekar
  0 siblings, 2 replies; 72+ messages in thread
From: Alexander Monakov @ 2023-08-15 14:07 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:

> Does this as the first paragraph address your concerns:

Thanks, this is nicer (see notes below). My main concern is that we shouldn't
pretend there's some method of verifying that arbitrary source code is "safe"
to pass to an unsandboxed compiler, nor should we push the responsibility of
doing that on users.

> The compiler driver processes source code, invokes other programs such as the
> assembler and linker and generates the output result, which may be assembly
> code or machine code.  It is necessary that all source code inputs to the
> compiler are trusted, since it is impossible for the driver to validate input
> source code for safety.

The statement begins with "It is necessary", but the next statement offers
an alternative in case the code is untrusted. This is a contradiction.
Is it necessary or not in the end?

I'd suggest to drop this statement and instead make a brief note that
compiling crafted/untrusted sources can result in arbitrary code execution
and unconstrained resource consumption in the compiler.

> For untrusted code should compilation should be done
                     ^^^^^^
		     typo (spurious 'should')
		     
> inside a sandboxed environment to ensure that it does not compromise the
> development environment.  Note that this still does not guarantee safety of
> the produced output programs and that such programs should still either be
> analyzed thoroughly for safety or run only inside a sandbox or an isolated
> system to avoid compromising the execution environment.

The last statement seems to be a new addition. It is too broad and again
makes a reference to analysis that appears quite theoretical. It might be
better to drop this (and instead talk in more specific terms about any
guarantees that produced binary code matches security properties intended
by the sources; I believe Richard Sandiford raised this previously).

Thanks.
Alexander


* Re: [RFC] GCC Security policy
  2023-08-15 14:07               ` Alexander Monakov
@ 2023-08-15 14:54                 ` Paul Koning
  2023-08-15 19:13                 ` Siddhesh Poyarekar
  1 sibling, 0 replies; 72+ messages in thread
From: Paul Koning @ 2023-08-15 14:54 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: Siddhesh Poyarekar, David Edelsohn, GCC Patches,
	Carlos O'Donell, richard.sandiford



> On Aug 15, 2023, at 10:07 AM, Alexander Monakov <amonakov@ispras.ru> wrote:
> 
> 
> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
> 
>> Does this as the first paragraph address your concerns:
> 
> Thanks, this is nicer (see notes below). My main concern is that we shouldn't
> pretend there's some method of verifying that arbitrary source code is "safe"
> to pass to an unsandboxed compiler, nor should we push the responsibility of
> doing that on users.

Perhaps, but clearly the compiler can't do it ("Halting problem"...) so it has to be clear that the solution must be outside the compiler.  

	paul



* Re: [RFC] GCC Security policy
  2023-08-15 14:07               ` Alexander Monakov
  2023-08-15 14:54                 ` Paul Koning
@ 2023-08-15 19:13                 ` Siddhesh Poyarekar
  2023-08-15 23:07                   ` Alexander Monakov
  1 sibling, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-15 19:13 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-15 10:07, Alexander Monakov wrote:
> 
> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
> 
>> Does this as the first paragraph address your concerns:
> 
> Thanks, this is nicer (see notes below). My main concern is that we shouldn't
> pretend there's some method of verifying that arbitrary source code is "safe"
> to pass to an unsandboxed compiler, nor should we push the responsibility of
> doing that on users.

But responsibility would be pushed to users, wouldn't it?

>> The compiler driver processes source code, invokes other programs such as the
>> assembler and linker and generates the output result, which may be assembly
>> code or machine code.  It is necessary that all source code inputs to the
>> compiler are trusted, since it is impossible for the driver to validate input
>> source code for safety.
> 
> The statement begins with "It is necessary", but the next statement offers
> an alternative in case the code is untrusted. This is a contradiction.
> Is it necessary or not in the end?
> 
> I'd suggest to drop this statement and instead make a brief note that
> compiling crafted/untrusted sources can result in arbitrary code execution
> and unconstrained resource consumption in the compiler.

So:

The compiler driver processes source code, invokes other programs such 
as the assembler and linker and generates the output result, which may 
be assembly code or machine code.  Compiling untrusted sources can 
result in arbitrary code execution and unconstrained resource 
consumption in the compiler. As a result, compilation of such code 
should be done inside a sandboxed environment to ensure that it does not 
compromise the development environment.
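One minimal sketch of such containment (illustrative only; real deployments
would reach for containers, seccomp filters, chroots or VMs depending on
their threat model, and the function name here is made up) is to run the
untrusted compile job in a child process under hard resource limits:

```c
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a command (e.g. a compiler invocation) with CPU-time and
   address-space caps, so a pathological input can stall or crash only
   the child, not the whole development environment. */
int run_limited(char *const argv[], rlim_t cpu_secs, rlim_t mem_bytes)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        struct rlimit cpu = { cpu_secs, cpu_secs };
        struct rlimit mem = { mem_bytes, mem_bytes };
        setrlimit(RLIMIT_CPU, &cpu);  /* signal delivered past the cap */
        setrlimit(RLIMIT_AS, &mem);   /* allocations beyond the cap fail */
        execvp(argv[0], argv);
        _exit(127);                   /* exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return status;                    /* raw wait status of the child */
}
```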

>> For untrusted code should compilation should be done
>                       ^^^^^^
> 		     typo (spurious 'should')

Ack, thanks.

> 		
>> inside a sandboxed environment to ensure that it does not compromise the
>> development environment.  Note that this still does not guarantee safety of
>> the produced output programs and that such programs should still either be
>> analyzed thoroughly for safety or run only inside a sandbox or an isolated
>> system to avoid compromising the execution environment.
> 
> The last statement seems to be a new addition. It is too broad and again
> makes a reference to analysis that appears quite theoretical. It might be
> better to drop this (and instead talk in more specific terms about any
> guarantees that produced binary code matches security properties intended
> by the sources; I believe Richard Sandiford raised this previously).

OK, so I actually cover this at the end of the section; Richard's point 
AFAICT was about hardening, for which I added another note to make it 
explicit that missed hardening does not constitute a CVE-worthy threat:

     As a result, the only case for a potential security issue in the
     compiler is when it generates vulnerable application code for
     trusted input source code that is conforming to the relevant
     programming standard or extensions documented as supported by GCC
     and the algorithm expressed in the source code does not have the
     vulnerability.  The output application code could be considered
     vulnerable if it produces an actual vulnerability in the target
     application, specifically in the following cases:

     - The application dereferences an invalid memory location despite
       the application sources being valid.
     - The application reads from or writes to a valid but incorrect
       memory location, resulting in an information integrity issue or an
       information leak.
     - The application ends up running in an infinite loop or with
       severe degradation in performance despite the input sources having
       no such issue, resulting in a Denial of Service.  Note that
       correct but non-performant code is not a security issue candidate;
       this only applies to incorrect code that may result in performance
       degradation severe enough to amount to a denial of service.
     - The application crashes due to the generated incorrect code,
       resulting in a Denial of Service.


* Re: [RFC] GCC Security policy
  2023-08-15 19:13                 ` Siddhesh Poyarekar
@ 2023-08-15 23:07                   ` Alexander Monakov
  2023-08-15 23:45                     ` David Edelsohn
                                       ` (2 more replies)
  0 siblings, 3 replies; 72+ messages in thread
From: Alexander Monakov @ 2023-08-15 23:07 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:

> > Thanks, this is nicer (see notes below). My main concern is that we
> > shouldn't pretend there's some method of verifying that arbitrary source
> > code is "safe" to pass to an unsandboxed compiler, nor should we push
> > the responsibility of doing that on users.
> 
> But responsibility would be pushed to users, wouldn't it?

Making users responsible for verifying that sources are "safe" is not okay
(we cannot teach them how to do that since there's no general method).
Making users responsible for sandboxing the compiler is fine (there's
a range of sandboxing solutions, from which they can choose according
to their requirements and threat model). Sorry about the ambiguity.

> So:
> 
> The compiler driver processes source code, invokes other programs such as the
> assembler and linker and generates the output result, which may be assembly
> code or machine code.  Compiling untrusted sources can result in arbitrary
> code execution and unconstrained resource consumption in the compiler. As a
> result, compilation of such code should be done inside a sandboxed environment
> to ensure that it does not compromise the development environment.

I'm happy with this, thanks for bearing with me.

> >> inside a sandboxed environment to ensure that it does not compromise the
> >> development environment.  Note that this still does not guarantee safety of
> >> the produced output programs and that such programs should still either be
> >> analyzed thoroughly for safety or run only inside a sandbox or an isolated
> >> system to avoid compromising the execution environment.
> > 
> > The last statement seems to be a new addition. It is too broad and again
> > makes a reference to analysis that appears quite theoretical. It might be
> > better to drop this (and instead talk in more specific terms about any
> > guarantees that produced binary code matches security properties intended
> > by the sources; I believe Richard Sandiford raised this previously).
> 
> OK, so I actually cover this at the end of the section; Richard's point AFAICT
> was about hardening, which I added another note for to make it explicit that
> missed hardening does not constitute a CVE-worthy threat:

Thanks for the reminder. To illustrate what I was talking about, let me give
two examples:

1) safety w.r.t timing attacks: even if the source code is written in
a manner that looks timing-safe, it might be transformed in a way that
mounting a timing attack on the resulting machine code is possible;

2) safety w.r.t information leaks: even if the source code attempts
to discard sensitive data (such as passwords and keys) immediately
after use, (partial) copies of that data may be left on stack and
in registers, to be leaked later via a different vulnerability.

For both 1) and 2), GCC is not engineered to respect such properties
during optimization and code generation, so it's not appropriate for such
tasks (a possible solution is to isolate such sensitive functions to
separate files, compile to assembly, inspect the assembly to check that it
still has the required properties, and use the inspected asm in subsequent
builds instead of the original high-level source).
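Point 2 above can be illustrated with the classic dead-store case (a sketch;
the function names are made up): a plain memset of a dying buffer is exactly
the kind of source-level intent the optimizer is free to discard, while a
volatile-qualified loop (or glibc's explicit_bzero) survives optimization,
though even that says nothing about copies left in registers or spill slots.

```c
#include <string.h>

/* At -O2, clear_naive's memset may be removed entirely when the buffer
   is provably never read again: the store is dead as far as the
   optimizer is concerned.  clear_forced uses volatile stores, which the
   compiler must emit.  Neither variant scrubs register or spill copies. */
static void clear_naive(char *buf, size_t n)
{
    memset(buf, 0, n);            /* candidate for dead-store elimination */
}

static void clear_forced(char *buf, size_t n)
{
    volatile char *p = buf;
    while (n--)
        *p++ = 0;                 /* volatile: each store must reach memory */
}
```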

Cheers.
Alexander


* Re: [RFC] GCC Security policy
  2023-08-14 13:26 ` Siddhesh Poyarekar
  2023-08-14 18:51   ` Richard Sandiford
@ 2023-08-15 23:45   ` David Malcolm
  2023-08-16  8:25     ` Alexander Monakov
  1 sibling, 1 reply; 72+ messages in thread
From: David Malcolm @ 2023-08-15 23:45 UTC (permalink / raw)
  To: Siddhesh Poyarekar, David Edelsohn, GCC Patches; +Cc: Carlos O'Donell

On Mon, 2023-08-14 at 09:26 -0400, Siddhesh Poyarekar wrote:
> Hi,
> 
> Here's the updated draft of the top part of the security policy with all 
> of the recommendations incorporated.
> 
> Thanks,
> Sid
> 
> 
> What is a GCC security bug?
> ===========================
> 
>      A security bug is one that threatens the security of a system or
>      network, or might compromise the security of data stored on it.
>      In the context of GCC there are multiple ways in which this might
>      happen and they're detailed below.
> 
> Compiler drivers, programs, libgccjit and support libraries
> -----------------------------------------------------------
> 
>      The compiler driver processes source code, invokes other programs
>      such as the assembler and linker and generates the output result,
>      which may be assembly code or machine code.  It is necessary that
>      all source code inputs to the compiler are trusted, since it is
>      impossible for the driver to validate input source code beyond
>      conformance to a programming language standard.
> 
>      The GCC JIT implementation, libgccjit, is intended to be plugged
>      into applications to translate input source code in the application
>      context.  Limitations that apply to the compiler
>      driver, apply here too in terms of sanitizing inputs, so it is
>      recommended that inputs are either sanitized by an external program
>      to allow only trusted, safe execution in the context of the
>      application or the JIT execution context is appropriately sandboxed
>      to contain the effects of any bugs in the JIT or its generated code
>      to the sandboxed environment.

I'd prefer to reword this, as libgccjit was a poor choice of name for
the library (sorry!), to make it clearer it can be used for both ahead-
of-time and just-in-time compilation, and that as used for compilation,
the host considerations apply, not just those of the generated target
code.

How about:

     The libgccjit library can, despite the name, be used both for
     ahead-of-time compilation and for just-in-time compilation.  In both
     cases it can be used to translate input representations (such as
     source code) in the application context; in the latter case the
     generated code is also run in the application context.
     Limitations that apply to the compiler driver, apply here too in
     terms of sanitizing inputs, so it is recommended that inputs are
     either sanitized by an external program to allow only trusted,
     safe compilation and execution in the context of the application,
     or that both the compilation *and* execution context of the code
     are appropriately sandboxed to contain the effects of any bugs in
     libgccjit, the application code using it, or its generated code to
     the sandboxed environment.

...or similar.

[...snip...]

Thanks
Dave



* Re: [RFC] GCC Security policy
  2023-08-15 23:07                   ` Alexander Monakov
@ 2023-08-15 23:45                     ` David Edelsohn
  2023-08-16  0:37                       ` Alexander Monakov
  2023-08-16  9:05                     ` Toon Moene
  2023-08-16 12:19                     ` Siddhesh Poyarekar
  2 siblings, 1 reply; 72+ messages in thread
From: David Edelsohn @ 2023-08-15 23:45 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: Siddhesh Poyarekar, GCC Patches, Carlos O'Donell, richard.sandiford


On Tue, Aug 15, 2023 at 7:07 PM Alexander Monakov <amonakov@ispras.ru>
wrote:

>
> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
>
> > > Thanks, this is nicer (see notes below). My main concern is that we
> > > shouldn't pretend there's some method of verifying that arbitrary source
> > > code is "safe" to pass to an unsandboxed compiler, nor should we push
> > > the responsibility of doing that on users.
> >
> > But responsibility would be pushed to users, wouldn't it?
>
> Making users responsible for verifying that sources are "safe" is not okay
> (we cannot teach them how to do that since there's no general method).
> Making users responsible for sandboxing the compiler is fine (there's
> a range of sandboxing solutions, from which they can choose according
> to their requirements and threat model). Sorry about the ambiguity.
>

Alex.

The compiler should faithfully implement the algorithms described by the
programmer.  The compiler is responsible if it generates incorrect code for
a well-defined, language-conforming program.  The compiler cannot be
responsible for security issues inherent in the user code, whether those
cause the compiler to function in a manner that adversely affects the
system or to generate code that behaves in a manner that adversely
affects the system.

If "safe" is the wrong word, what word would you suggest?


> > So:
> >
> > The compiler driver processes source code, invokes other programs such
> as the
> > assembler and linker and generates the output result, which may be
> assembly
> > code or machine code.  Compiling untrusted sources can result in
> arbitrary
> > code execution and unconstrained resource consumption in the compiler.
> As a
> > result, compilation of such code should be done inside a sandboxed
> environment
> > to ensure that it does not compromise the development environment.
>
> I'm happy with this, thanks for bearing with me.
>
> > >> inside a sandboxed environment to ensure that it does not compromise
> the
> > >> development environment.  Note that this still does not guarantee
> safety of
> > >> the produced output programs and that such programs should still
> either be
> > >> analyzed thoroughly for safety or run only inside a sandbox or an
> isolated
> > >> system to avoid compromising the execution environment.
> > >
> > > The last statement seems to be a new addition. It is too broad and
> again
> > > makes a reference to analysis that appears quite theoretical. It might
> be
> > > better to drop this (and instead talk in more specific terms about any
> > > guarantees that produced binary code matches security properties
> intended
> > > by the sources; I believe Richard Sandiford raised this previously).
> >
> > OK, so I actually cover this at the end of the section; Richard's point
> AFAICT
> > was about hardening, which I added another note for to make it explicit
> that
> > missed hardening does not constitute a CVE-worthy threat:
>
> Thanks for the reminder. To illustrate what I was talking about, let me
> give
> two examples:
>
> 1) safety w.r.t timing attacks: even if the source code is written in
> a manner that looks timing-safe, it might be transformed in a way that
> mounting a timing attack on the resulting machine code is possible;
>
> 2) safety w.r.t information leaks: even if the source code attempts
> to discard sensitive data (such as passwords and keys) immediately
> after use, (partial) copies of that data may be left on stack and
> in registers, to be leaked later via a different vulnerability.
>
> For both 1) and 2), GCC is not engineered to respect such properties
> during optimization and code generation, so it's not appropriate for such
> tasks (a possible solution is to isolate such sensitive functions to
> separate files, compile to assembly, inspect the assembly to check that it
> still has the required properties, and use the inspected asm in subsequent
> builds instead of the original high-level source).
>

At some point the system tools need to respect the programmer or operator.
There is a difference between writing "Hello, World" and writing
performance critical or safety critical code.  That is the responsibility
of the programmer and the development team to choose the right software
engineers and right tools.  And to have the development environment and
checks in place to ensure that the results are meeting the requirements.

It is not the role of GCC or its security policy to tell people how to do
their job or hobby.  This isn't a safety tag required to be attached to a
new mattress.

Thanks, David


>
> Cheers.
> Alexander
>


* Re: [RFC] GCC Security policy
  2023-08-15 23:45                     ` David Edelsohn
@ 2023-08-16  0:37                       ` Alexander Monakov
  2023-08-16  0:50                         ` Paul Koning
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Monakov @ 2023-08-16  0:37 UTC (permalink / raw)
  To: David Edelsohn
  Cc: Siddhesh Poyarekar, GCC Patches, Carlos O'Donell, richard.sandiford


On Tue, 15 Aug 2023, David Edelsohn wrote:

> > Making users responsible for verifying that sources are "safe" is not okay
> > (we cannot teach them how to do that since there's no general method).
> > Making users responsible for sandboxing the compiler is fine (there's
> > a range of sandboxing solutions, from which they can choose according
> > to their requirements and threat model). Sorry about the ambiguity.
> >
> 
> Alex.
> 
> The compiler should faithfully implement the algorithms described by the
> programmer.  The compiler is responsible if it generates incorrect code for
> a well-defined, language-conforming program.  The compiler cannot be
> responsible for security issues inherent in the user code, whether those
> cause the compiler to function in a manner that adversely affects the
> system or to generate code that behaves in a manner that adversely
> affects the system.
> 
> If "safe" is the wrong word, what word would you suggest?

I think "safe" is the right word here. We also used "trusted" in a similar
sense. I believe we were on the same page about that.

> > For both 1) and 2), GCC is not engineered to respect such properties
> > during optimization and code generation, so it's not appropriate for such
> > tasks (a possible solution is to isolate such sensitive functions to
> > separate files, compile to assembly, inspect the assembly to check that it
> > still has the required properties, and use the inspected asm in subsequent
> > builds instead of the original high-level source).
> >
> 
> At some point the system tools need to respect the programmer or operator.
> There is a difference between writing "Hello, World" and writing
> performance critical or safety critical code.  That is the responsibility
> of the programmer and the development team to choose the right software
> engineers and right tools.  And to have the development environment and
> checks in place to ensure that the results are meeting the requirements.
> 
> It is not the role of GCC or its security policy to tell people how to do
> their job or hobby.  This isn't a safety tag required to be attached to a
> new mattress.

Yes (though I'm afraid the analogy with the mattress is a bit lost on me).
Those examples were meant to illustrate the point I tried to make earlier,
not as additions proposed for the Security Policy. Specific examples
where we can tell people in advance that compiler output needs to be
verified, because the compiler is not engineered to preserve those
security-relevant properties from the source code (and we would not
accept such accidents as security bugs).

Granted, it is a bit of a stretch since the notion of timing-safety is
not really well-defined for C source code, but I didn't come up with
better examples.

Alexander


* Re: [RFC] GCC Security policy
  2023-08-16  0:37                       ` Alexander Monakov
@ 2023-08-16  0:50                         ` Paul Koning
  2023-08-16  7:53                           ` Alexander Monakov
  0 siblings, 1 reply; 72+ messages in thread
From: Paul Koning @ 2023-08-16  0:50 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, Siddhesh Poyarekar, GCC Patches,
	Carlos O'Donell, richard.sandiford



> On Aug 15, 2023, at 8:37 PM, Alexander Monakov <amonakov@ispras.ru> wrote:
> 
>> ...
>> At some point the system tools need to respect the programmer or operator.
>> There is a difference between writing "Hello, World" and writing
>> performance critical or safety critical code.  That is the responsibility
>> of the programmer and the development team to choose the right software
>> engineers and right tools.  And to have the development environment and
>> checks in place to ensure that the results are meeting the requirements.
>> 
>> It is not the role of GCC or its security policy to tell people how to do
>> their job or hobby.  This isn't a safety tag required to be attached to a
>> new mattress.
> 
> Yes (though I'm afraid the analogy with the mattress is a bit lost on me).
> Those examples were meant to illustrate the point I tried to make earlier,
> not as additions proposed for the Security Policy. Specific examples
> where we can tell people in advance that compiler output needs to be
> verified, because the compiler is not engineered to preserve those
> security-relevant properties from the source code (and we would not
> accept such accidents as security bugs).

Now I'm confused.  I thought the whole point of what GCC is trying to do, and wants to document, is that it DOES preserve security properties.  If the source code is standards-compliant and contains algorithms free of security holes, then the compiler is supposed to deliver output code that is likewise free of holes -- in other words, the transformation performed by GCC does not introduce holes in a hole-free input.

> Granted, it is a bit of a stretch since the notion of timing-safety is
> not really well-defined for C source code, but I didn't come up with
> better examples.

Is "timing-safety" a security property?  Not the way I understand that term.  It sounds like another way to say that the code meets real time constraints or requirements.  No, compilers don't help with that (at least C doesn't -- Ada might be better here but I don't know enough).  For sufficiently strict requirements you'd have to examine both the generated machine code and understand, in gruesome detail, what the timing behaviors of the executing hardware are.  Good luck if it's a modern billion-transistor machine.

Again, I don't see that as a security property.  If it's considered desirable to say something about this, fine, but the words Siddhesh crafted don't fit for that kind of property.

	paul



* Re: [RFC] GCC Security policy
  2023-08-16  0:50                         ` Paul Koning
@ 2023-08-16  7:53                           ` Alexander Monakov
  2023-08-16 13:06                             ` Paul Koning
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Monakov @ 2023-08-16  7:53 UTC (permalink / raw)
  To: Paul Koning
  Cc: David Edelsohn, Siddhesh Poyarekar, GCC Patches,
	Carlos O'Donell, richard.sandiford


On Tue, 15 Aug 2023, Paul Koning wrote:

> Now I'm confused.  I thought the whole point of what GCC is trying to do, and
> wants to document, is that it DOES preserve security properties.  If the
> source code is standards-compliant and contains algorithms free of security
> holes, then the compiler is supposed to deliver output code that is likewise
> free of holes -- in other words, the transformation performed by GCC does not
> introduce holes in a hole-free input.

Yes, we seem to broadly agree here. The text given by Siddhesh enumerates
scenarios where an incorrect transform could be considered a security bug.
My examples explore situations outside of those scenarios, picking two
popular security properties that cannot always be attained by writing
C source that vaguely appears to conform and expecting the compiler
to translate it into machine code that actually conforms.

> > Granted, it is a bit of a stretch since the notion of timing-safety is
> > not really well-defined for C source code, but I didn't come up with
> > better examples.
> 
> Is "timing-safety" a security property?  Not the way I understand that
> term.  It sounds like another way to say that the code meets real time
> constraints or requirements.

I meant in the sense of not admitting timing attacks:
https://en.wikipedia.org/wiki/Timing_attack

> No, compilers don't help with that (at least C doesn't -- Ada might be
> better here but I don't know enough).  For sufficiently strict
> requirements you'd have to examine both the generated machine code and
> understand, in gruesome detail, what the timing behaviors of the executing
> hardware are.  Good luck if it's a modern billion-transistor machine.

Yes. On the other hand, the reality in the FOSS ecosystem is that
cryptographic libraries heavily lean on the ability to express
a constant-time algorithm in C and get machine code that is actually
constant-time. There's a bit of a conflict here between what we
can promise and what people might expect of GCC, and it seems
relevant when discussing what goes into the Security Policy.

Thanks.
Alexander


* Re: [RFC] GCC Security policy
  2023-08-15 23:45   ` David Malcolm
@ 2023-08-16  8:25     ` Alexander Monakov
  2023-08-16 11:39       ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Monakov @ 2023-08-16  8:25 UTC (permalink / raw)
  To: David Malcolm
  Cc: Siddhesh Poyarekar, David Edelsohn, GCC Patches, Carlos O'Donell



On Tue, 15 Aug 2023, David Malcolm via Gcc-patches wrote:

> I'd prefer to reword this, as libgccjit was a poor choice of name for
> the library (sorry!), to make it clearer it can be used for both ahead-
> of-time and just-in-time compilation, and that as used for compilation,
> the host considerations apply, not just those of the generated target
> code.
> 
> How about:
> 
>      The libgccjit library can, despite the name, be used both for
>      ahead-of-time compilation and for just-in-time compilation.  In both
>      cases it can be used to translate input representations (such as
>      source code) in the application context; in the latter case the
>      generated code is also run in the application context.
>      Limitations that apply to the compiler driver apply here too in
>      terms of sanitizing inputs, so it is recommended that inputs are

Unfortunately the lines that follow:

>      either sanitized by an external program to allow only trusted,
>      safe compilation and execution in the context of the application,

again make a reference to a purely theoretical "external program" that
is not going to exist in reality, and I made a fuss about that in another
subthread (sorry Siddhesh). We shouldn't speak as if this solution is
actually available to users.

I know this is not the main point of your email, but we came up with
a better wording for the compiler driver, and it would be good to align
this text with that.

Thanks.
Alexander


* Re: [RFC] GCC Security policy
  2023-08-15 23:07                   ` Alexander Monakov
  2023-08-15 23:45                     ` David Edelsohn
@ 2023-08-16  9:05                     ` Toon Moene
  2023-08-16 12:19                     ` Siddhesh Poyarekar
  2 siblings, 0 replies; 72+ messages in thread
From: Toon Moene @ 2023-08-16  9:05 UTC (permalink / raw)
  To: gcc-patches

On 8/16/23 01:07, Alexander Monakov wrote:

> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
> 
>>> Thanks, this is nicer (see notes below). My main concern is that we
>>> shouldn't pretend there's some method of verifying that arbitrary source
>>> code is "safe" to pass to an unsandboxed compiler, nor should we push
>>> the responsibility of doing that on users.
>>
>> But responsibility would be pushed to users, wouldn't it?
> 
> Making users responsible for verifying that sources are "safe" is not okay
> (we cannot teach them how to do that since there's no general method).

While there is no "general method" for this, there exists a whole 
Working Group under ISO whose responsibility is to identify and list 
vulnerabilities in programming languages - Working Group 23.

Its web page is: https://www.open-std.org/jtc1/sc22/wg23/

Kind regards,

-- 
Toon Moene - e-mail: toon@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands



* Re: [RFC] GCC Security policy
  2023-08-16  8:25     ` Alexander Monakov
@ 2023-08-16 11:39       ` Siddhesh Poyarekar
  2023-08-16 11:50         ` Alexander Monakov
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-16 11:39 UTC (permalink / raw)
  To: Alexander Monakov, David Malcolm
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell

On 2023-08-16 04:25, Alexander Monakov wrote:
> 
> On Tue, 15 Aug 2023, David Malcolm via Gcc-patches wrote:
> 
>> I'd prefer to reword this, as libgccjit was a poor choice of name for
>> the library (sorry!), to make it clearer it can be used for both ahead-
>> of-time and just-in-time compilation, and that as used for compilation,
>> the host considerations apply, not just those of the generated target
>> code.
>>
>> How about:
>>
>>       The libgccjit library can, despite the name, be used both for
>>       ahead-of-time compilation and for just-in-time compilation.  In both
>>       cases it can be used to translate input representations (such as
>>       source code) in the application context; in the latter case the
>>       generated code is also run in the application context.
>>       Limitations that apply to the compiler driver apply here too in
>>       terms of sanitizing inputs, so it is recommended that inputs are

Thanks David!

> 
> Unfortunately the lines that follow:
> 
>>       either sanitized by an external program to allow only trusted,
>>       safe compilation and execution in the context of the application,
> 
> again make a reference to a purely theoretical "external program" that
> is not going to exist in reality, and I made a fuss about that in another
> subthread (sorry Siddhesh). We shouldn't speak as if this solution is
> actually available to users.
> 
> I know this is not the main point of your email, but we came up with
> a better wording for the compiler driver, and it would be good to align
> this text with that.

How about:

     The libgccjit library can, despite the name, be used both for
     ahead-of-time compilation and for just-in-time compilation.  In both
     cases it can be used to translate input representations (such as
     source code) in the application context; in the latter case the
     generated code is also run in the application context.

     Limitations that apply to the compiler driver apply here too in
     terms of sanitizing inputs, and it is recommended that both the
     compilation *and* execution context of the code are appropriately
     sandboxed to contain the effects of any bugs in libgccjit, the
     application code using it, or its generated code to the sandboxed
     environment.


* Re: [RFC] GCC Security policy
  2023-08-16 11:39       ` Siddhesh Poyarekar
@ 2023-08-16 11:50         ` Alexander Monakov
  0 siblings, 0 replies; 72+ messages in thread
From: Alexander Monakov @ 2023-08-16 11:50 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Malcolm, David Edelsohn, GCC Patches, Carlos O'Donell


> > Unfortunately the lines that follow:
> > 
> >>       either sanitized by an external program to allow only trusted,
> >>       safe compilation and execution in the context of the application,
> > 
> > again make a reference to a purely theoretical "external program" that
> > is not going to exist in reality, and I made a fuss about that in another
> > subthread (sorry Siddhesh). We shouldn't speak as if this solution is
> > actually available to users.
> > 
> > I know this is not the main point of your email, but we came up with
> > a better wording for the compiler driver, and it would be good to align
> > this text with that.
> 
> How about:
> 
>     The libgccjit library can, despite the name, be used both for
>     ahead-of-time compilation and for just-in-time compilation.  In both
>     cases it can be used to translate input representations (such as
>     source code) in the application context; in the latter case the
>     generated code is also run in the application context.
> 
>     Limitations that apply to the compiler driver apply here too in
>     terms of sanitizing inputs, and it is recommended that both the

I'd prefer 'trusting inputs' instead of 'sanitizing inputs' above.

>     compilation *and* execution context of the code are appropriately
>     sandboxed to contain the effects of any bugs in libgccjit, the
>     application code using it, or its generated code to the sandboxed
>     environment.

*thumbs up*

Thanks.
Alexander


* Re: [RFC] GCC Security policy
  2023-08-15 23:07                   ` Alexander Monakov
  2023-08-15 23:45                     ` David Edelsohn
  2023-08-16  9:05                     ` Toon Moene
@ 2023-08-16 12:19                     ` Siddhesh Poyarekar
  2023-08-16 15:06                       ` Alexander Monakov
  2 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-16 12:19 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-15 19:07, Alexander Monakov wrote:
> 
> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
> 
>>> Thanks, this is nicer (see notes below). My main concern is that we
>>> shouldn't pretend there's some method of verifying that arbitrary source
>>> code is "safe" to pass to an unsandboxed compiler, nor should we push
>>> the responsibility of doing that on users.
>>
>> But responsibility would be pushed to users, wouldn't it?
> 
> Making users responsible for verifying that sources are "safe" is not okay
> (we cannot teach them how to do that since there's no general method).
> Making users responsible for sandboxing the compiler is fine (there's
> a range of sandboxing solutions, from which they can choose according
> to their requirements and threat model). Sorry about the ambiguity.

No, I understood the distinction you're trying to make; I just wanted to 
point out that the effect isn't all that different.  The intent of the 
wording is not to prescribe a solution, but to describe what the 
compiler cannot do and hence, users must find a way to do this.  I think 
we have a consensus on this part of the wording though because we're not 
really responsible for the prescription here and I'm happy with just 
asking users to sandbox.

I suppose it's kinda like saying "don't try this at home".  You know 
many will and some will break their leg while others will come out of it 
feeling invincible.  Our job is to let them know that they will likely 
break their leg :)

>>>> inside a sandboxed environment to ensure that it does not compromise the
>>>> development environment.  Note that this still does not guarantee safety of
>>>> the produced output programs and that such programs should still either be
>>>> analyzed thoroughly for safety or run only inside a sandbox or an isolated
>>>> system to avoid compromising the execution environment.
>>>
>>> The last statement seems to be a new addition. It is too broad and again
>>> makes a reference to analysis that appears quite theoretical. It might be
>>> better to drop this (and instead talk in more specific terms about any
>>> guarantees that produced binary code matches security properties intended
>>> by the sources; I believe Richard Sandiford raised this previously).
>>
>> OK, so I actually cover this at the end of the section; Richard's point AFAICT
>> was about hardening, which I added another note for to make it explicit that
>> missed hardening does not constitute a CVE-worthy threat:
> 
> Thanks for the reminder. To illustrate what I was talking about, let me give
> two examples:
> 
> 1) safety w.r.t timing attacks: even if the source code is written in
> a manner that looks timing-safe, it might be transformed in a way that
> mounting a timing attack on the resulting machine code is possible;
> 
> 2) safety w.r.t information leaks: even if the source code attempts
> to discard sensitive data (such as passwords and keys) immediately
> after use, (partial) copies of that data may be left on stack and
> in registers, to be leaked later via a different vulnerability.
> 
> For both 1) and 2), GCC is not engineered to respect such properties
> during optimization and code generation, so it's not appropriate for such
> tasks (a possible solution is to isolate such sensitive functions to
> separate files, compile to assembly, inspect the assembly to check that it
> still has the required properties, and use the inspected asm in subsequent
> builds instead of the original high-level source).

How about this in the last section titled "Security features implemented 
in GCC", since that's where we also deal with security hardening.

     Similarly, GCC may transform code in a way that the correctness of
     the expressed algorithm is preserved, but supplementary properties
     that are observable only outside the program, or through a
     vulnerability in the program, may not be preserved.  This is not a
     security issue in GCC; in such cases, the vulnerability that
     caused exposure of the supplementary properties must be fixed.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2023-08-16  7:53                           ` Alexander Monakov
@ 2023-08-16 13:06                             ` Paul Koning
  0 siblings, 0 replies; 72+ messages in thread
From: Paul Koning @ 2023-08-16 13:06 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, Siddhesh Poyarekar, GCC Patches,
	Carlos O'Donell, richard.sandiford



> On Aug 16, 2023, at 3:53 AM, Alexander Monakov <amonakov@ispras.ru> wrote:
> 
>> ...
>> Is "timing-safety" a security property?  Not the way I understand that
>> term.  It sounds like another way to say that the code meets real time
>> constraints or requirements.
> 
> I meant in the sense of not admitting timing attacks:
> https://en.wikipedia.org/wiki/Timing_attack
> 
>> No, compilers don't help with that (at least C doesn't -- Ada might be
>> better here but I don't know enough).  For sufficiently strict
>> requirements you'd have to examine both the generated machine code and
>> understand, in gruesome detail, what the timing behaviors of the executing
>> hardware are.  Good luck if it's a modern billion-transistor machine.
> 
> Yes. On the other hand, the reality in the FOSS ecosystem is that
> cryptographic libraries heavily lean on the ability to express
> a constant-time algorithm in C and get machine code that is actually
> constant-time. There's a bit of a conflict here between what we
> can promise and what people might expect of GCC, and it seems
> relevant when discussing what goes into the Security Policy.

I agree.  What should be said is that such techniques are erroneous.  The kind of code you're talking about inserts steps not strictly needed for the calculation to make it constant time (or more nearly so).  But clearly that has to rely on an assumption that the optimizer isn't smart enough to spot those unnecessary operations and delete them.  Never mind the fact that it relies on a notion that C statements have timing properties in the first place, which the standard doesn't define.

So I would argue that a serious attempt to cure timing attacks has to be coded in assembly language.  Even then, of course, optimizations in modern machine pipelines may give you trouble, but at least in that case you're writing explicitly for a specific ISA and are in a position to take into account its timing properties, to the extent they are known and defined.

	paul




* Re: [RFC] GCC Security policy
  2023-08-16 12:19                     ` Siddhesh Poyarekar
@ 2023-08-16 15:06                       ` Alexander Monakov
  2023-08-16 15:18                         ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Monakov @ 2023-08-16 15:06 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford


On Wed, 16 Aug 2023, Siddhesh Poyarekar wrote:

> No, I understood the distinction you're trying to make; I just wanted to point
> out that the effect isn't all that different.  The intent of the wording is
> not to prescribe a solution, but to describe what the compiler cannot do and
> hence, users must find a way to do this.  I think we have a consensus on this
> part of the wording though because we're not really responsible for the
> prescription here and I'm happy with just asking users to sandbox.

Nice!

> I suppose it's kinda like saying "don't try this at home".  You know many will
> and some will break their leg while others will come out of it feeling
> invincible.  Our job is to let them know that they will likely break their leg
> :)

Continuing this analogy, I was protesting against doing our job by telling
users "when trying this at home, make sure to wear vibranium shielding"
while knowing for sure that nobody can, in fact, obtain said shielding,
making our statement not helpful and rather tautological.

> How about this in the last section titled "Security features implemented in
> GCC", since that's where we also deal with security hardening.
> 
>     Similarly, GCC may transform code in a way that the correctness of
>     the expressed algorithm is preserved, but supplementary properties
>     that are observable only outside the program, or through a
>     vulnerability in the program, may not be preserved.  This is not a
>     security issue in GCC; in such cases, the vulnerability that
>     caused exposure of the supplementary properties must be fixed.

Yeah, indicating scenarios that fall outside of intended guarantees should
be helpful. I feel the exact text quoted above will be hard to decipher
without knowing the discussion that led to it. Some sort of supplementary
section with examples might help there.

In any case, I hope further discussion, clarification and wordsmithing
goes productively for you both here on the list and during the Cauldron.

Thanks.
Alexander


* Re: [RFC] GCC Security policy
  2023-08-16 15:06                       ` Alexander Monakov
@ 2023-08-16 15:18                         ` Siddhesh Poyarekar
  2023-08-16 16:02                           ` Alexander Monakov
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-08-16 15:18 UTC (permalink / raw)
  To: Alexander Monakov
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford

On 2023-08-16 11:06, Alexander Monakov wrote:
>> No, I understood the distinction you're trying to make; I just wanted to point
>> out that the effect isn't all that different.  The intent of the wording is
>> not to prescribe a solution, but to describe what the compiler cannot do and
>> hence, users must find a way to do this.  I think we have a consensus on this
>> part of the wording though because we're not really responsible for the
>> prescription here and I'm happy with just asking users to sandbox.
> 
> Nice!
> 
>> I suppose it's kinda like saying "don't try this at home".  You know many will
>> and some will break their leg while others will come out of it feeling
>> invincible.  Our job is to let them know that they will likely break their leg
>> :)
> 
> Continuing this analogy, I was protesting against doing our job by telling
> users "when trying this at home, make sure to wear vibranium shielding"
> while knowing for sure that nobody can, in fact, obtain said shielding,
> making our statement not helpful and rather tautological.

:)

>> How about this in the last section titled "Security features implemented in
>> GCC", since that's where we also deal with security hardening.
>>
>>      Similarly, GCC may transform code in a way that the correctness of
>>      the expressed algorithm is preserved but supplementary properties
>>      that are observable only outside the program or through a
>>      vulnerability in the program, may not be preserved.  This is not a
>>      security issue in GCC and in such cases, the vulnerability that
>>      caused exposure of the supplementary properties must be fixed.
> 
> Yeah, indicating scenarios that fall outside of intended guarantees should
> be helpful. I feel the exact text quoted above will be hard to decipher
> without knowing the discussion that led to it. Some sort of supplementary
> section with examples might help there.

Ah, so I had started out by listing examples but dropped them before 
emailing.  How about:

     Similarly, GCC may transform code in a way that the correctness of
     the expressed algorithm is preserved but supplementary properties
     that are observable only outside the program or through a
     vulnerability in the program, may not be preserved.  Examples
     of such supplementary properties could be the state of memory after
     it is no longer in use, performance and timing characteristics of a
     program, state of the CPU cache, etc. Such issues are not security
     vulnerabilities in GCC and in such cases, the vulnerability that
     caused exposure of the supplementary properties must be fixed.

> In any case, I hope further discussion, clarification and wordsmithing
> goes productively for you both here on the list and during the Cauldron.

Thanks!

Sid

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-16 15:18                         ` Siddhesh Poyarekar
@ 2023-08-16 16:02                           ` Alexander Monakov
  0 siblings, 0 replies; 72+ messages in thread
From: Alexander Monakov @ 2023-08-16 16:02 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, richard.sandiford


On Wed, 16 Aug 2023, Siddhesh Poyarekar wrote:

> > Yeah, indicating scenarios that fall outside of intended guarantees should
> > be helpful. I feel the exact text quoted above will be hard to decipher
> > without knowing the discussion that led to it. Some sort of supplementary
> > section with examples might help there.
> 
> Ah, so I had started out by listing examples but dropped them before emailing.
> How about:
> 
>     Similarly, GCC may transform code in a way that the correctness of
>     the expressed algorithm is preserved but supplementary properties
>     that are observable only outside the program or through a
>     vulnerability in the program, may not be preserved.  Examples
>     of such supplementary properties could be the state of memory after
>     it is no longer in use, performance and timing characteristics of a
>     program, state of the CPU cache, etc. Such issues are not security
>     vulnerabilities in GCC and in such cases, the vulnerability that
>     caused exposure of the supplementary properties must be fixed.

I would say that as follows:

	Similarly, GCC may transform code in a way that the correctness of
	the expressed algorithm is preserved, but supplementary properties
	that are not specifically expressible in a high-level language
	are not preserved. Examples of such supplementary properties
	include absence of sensitive data in the program's address space
	after an attempt to wipe it, or data-independent timing of code.
	When the source code attempts to express such properties, failure
	to preserve them in resulting machine code is not a security issue
	in GCC.

Alexander

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-07 17:29 [RFC] GCC Security policy David Edelsohn
  2023-08-08  8:16 ` Richard Biener
  2023-08-14 13:26 ` Siddhesh Poyarekar
@ 2023-09-06 11:23 ` Siddhesh Poyarekar
  2023-09-20  7:36 ` Arnaud Charlet
  3 siblings, 0 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2023-09-06 11:23 UTC (permalink / raw)
  To: David Edelsohn, GCC Patches; +Cc: Carlos O'Donell

Hello folks,

Here's v3 of the top part of the security policy.  Hopefully this 
addresses all concerns raised so far.

Thanks,
Sid


What is a GCC security bug?
===========================

     A security bug is one that threatens the security of a system or
     network, or might compromise the security of data stored on it.
     In the context of GCC there are multiple ways in which this might
     happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
-----------------------------------------------------------

     The compiler driver processes source code, invokes other programs
     such as the assembler and linker and generates the output result,
     which may be assembly code or machine code.  Compiling untrusted
     sources can result in arbitrary code execution and unconstrained
     resource consumption in the compiler. As a result, compilation of
     such code should be done inside a sandboxed environment to ensure
     that it does not compromise the development environment.

     The libgccjit library can, despite the name, be used both for
     ahead-of-time compilation and for just-in-time compilation.  In both
     cases it can be used to translate input representations (such as
     source code) in the application context; in the latter case the
     generated code is also run in the application context.

     The limitations that apply to the compiler driver apply here too in
     terms of sanitizing inputs, and it is recommended that both the
     compilation *and* execution context of the code are appropriately
     sandboxed to contain the effects of any bugs in libgccjit, the
     application code using it, or its generated code to the sandboxed
     environment.

     Support libraries such as libiberty, libcc1, libvtv and libcpp have
     been developed separately to share code with other tools such as
     binutils and gdb.  These libraries again have similar challenges to
     compiler drivers.  While they are expected to be robust against
     arbitrary input, they should only be used with trusted inputs.

     Libraries such as zlib that are bundled with GCC to build it will
     be treated the same as the compiler drivers and programs as far as
     security coverage is concerned.  However, if you find an issue in
     these libraries independent of their use in GCC, you should reach
     out to their upstream projects to report it.

     As a result, the only case for a potential security issue in the
     compiler is when it generates vulnerable application code for
     trusted input source code that conforms to the relevant
     programming standard or extensions documented as supported by GCC
     and the algorithm expressed in the source code does not have the
     vulnerability.  The output application code could be considered
     vulnerable if it produces an actual vulnerability in the target
     application, specifically in the following cases:

     - The application dereferences an invalid memory location despite
       the application sources being valid.
     - The application reads from or writes to a valid but incorrect
       memory location, resulting in an information integrity issue or an
       information leak.
     - The application ends up running in an infinite loop or with
       severe degradation in performance despite the input sources having
       no such issue, resulting in a Denial of Service.  Note that
       correct but non-performant code is not a security issue candidate;
       this applies only to incorrect code that may result in performance
       degradation severe enough to amount to a denial of service.
     - The application crashes due to the generated incorrect code,
       resulting in a Denial of Service.

Language runtime libraries
--------------------------

     GCC also builds and distributes libraries that are intended to be
     used widely to implement runtime support for various programming
     languages.  These include the following:

     * libada
     * libatomic
     * libbacktrace
     * libcc1
     * libcody
     * libcpp
     * libdecnumber
     * libffi
     * libgcc
     * libgfortran
     * libgm2
     * libgo
     * libgomp
     * libiberty
     * libitm
     * libobjc
     * libphobos
     * libquadmath
     * libsanitizer
     * libssp
     * libstdc++

     These libraries are intended to be used in arbitrary contexts and as
     a result, bugs in these libraries may be evaluated for security
     impact.  However, some of these libraries, e.g. libgo, libphobos,
     etc., are not maintained in the GCC project, so the GCC project may
     not be the correct point of contact for them.  You are
     encouraged to look at README files within those library directories
     to locate the canonical security contact point for those projects
     and include them in the report.  Once the issue is fixed in the
     upstream project, the fix will be synced into GCC in a future
     release.

     Most security vulnerabilities in these runtime libraries arise when
     an application uses functionality in a specific way.  As a result,
     not all bugs qualify as security-relevant.  The following guidelines
     can help with the decision:

     - Buffer overflows and integer overflows should be treated as
       security issues if it is conceivable that the data triggering them
       can come from an untrusted source.
     - Bugs that cause memory corruption which is likely exploitable
       should be treated as security bugs.
     - Information disclosure bugs can be security issues, especially if
       exposure through applications can be determined.
     - Memory leaks and races are security bugs if they cause service
       breakage.
     - Stack overflows through unbounded alloca calls or variable-length
       arrays are security bugs if it is conceivable that the data
       triggering the overflow could come from an untrusted source.
     - Stack overflows through deep recursion and other crashes are
       security bugs if they cause service breakage.
     - Bugs that cripple the whole system (so that it doesn't even boot
       or does not run most applications) are not security bugs because
       they will not be exploitable in practice, due to general system
       instability.

Diagnostic libraries
--------------------

     The sanitizer library bundled in GCC is intended to be used in
     diagnostic cases and not intended for use in sensitive environments.
     As a result, bugs in the sanitizer will not be considered security
     sensitive.

GCC plugins
-----------

     It should be noted that GCC may execute arbitrary code loaded by a
     user through the GCC plugin mechanism or through a system preloading
     mechanism.  Such custom code should be vetted by the user for safety,
     as bugs exposed through such code will not be considered security
     issues.

Security features implemented in GCC
------------------------------------

     GCC implements a number of security features that reduce the impact
     of security issues in applications, such as -fstack-protector,
     -fstack-clash-protection, _FORTIFY_SOURCE and so on.  A failure of
     these features to function perfectly in all situations is not a
     security issue in itself, since they depend on heuristics and may
     not always provide full protection coverage.

     Similarly, GCC may transform code in a way that the correctness of
     the expressed algorithm is preserved, but supplementary properties
     that are not specifically expressible in a high-level language
     are not preserved. Examples of such supplementary properties
     include absence of sensitive data in the program's address space
     after an attempt to wipe it, or data-independent timing of code.
     When the source code attempts to express such properties, failure
     to preserve them in resulting machine code is not a security issue
     in GCC.

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-07 17:29 [RFC] GCC Security policy David Edelsohn
                   ` (2 preceding siblings ...)
  2023-09-06 11:23 ` Siddhesh Poyarekar
@ 2023-09-20  7:36 ` Arnaud Charlet
  3 siblings, 0 replies; 72+ messages in thread
From: Arnaud Charlet @ 2023-09-20  7:36 UTC (permalink / raw)
  To: David Edelsohn
  Cc: GCC Patches, Siddhesh Poyarekar, Carlos O'Donell,
	Frederic Leger, Arnaud Charlet

[-- Attachment #1: Type: text/plain, Size: 3698 bytes --]

This is a great initiative I think.

See reference to AdaCore's security email below (among Debian, Red Hat,
SUSE)

On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches <
gcc-patches@gcc.gnu.org> wrote:

> FOSS Best Practices recommends that projects have an official Security
> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
> repository.  GLIBC and Binutils have added such documents.
>
> Appended is a prototype for a Security policy file for GCC based on the
> Binutils document because GCC seems to have more affinity with Binutils as
> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
> require additional security policies?
>
> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
> Security policy?
>
> [ ] Does GCC, or some components of GCC, require additional care because of
> runtime libraries like libgcc and libstdc++, and because of gcov and
> profile-directed feedback?
>
> Thoughts?
>
> Thanks, David
>
> GCC Security Process
> ====================
>
> What is a GCC security bug?
> ===========================
>
>     A security bug is one that threatens the security of a system or
>     network, or might compromise the security of data stored on it.
>     In the context of GCC there are two ways in which such
>     bugs might occur.  In the first, the programs themselves might be
>     tricked into a direct compromise of security.  In the second, the
>     tools might introduce a vulnerability in the generated output that
>     was not already present in the files used as input.
>
>     Other than that, all other bugs will be treated as non-security
>     issues.  This does not mean that they will be ignored, just that
>     they will not be given the priority that is given to security bugs.
>
>     This stance applies to the creation tools in the GCC (e.g.,
>     gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
>     libraries that they use.
>
> Notes:
> ======
>
>     None of the programs in GCC need elevated privileges to operate and
>     it is recommended that users do not use them from accounts where such
>     privileges are automatically available.
>
> Reporting private security bugs
> ===============================
>
>    *All bugs reported in the GCC Bugzilla are public.*
>
>    In order to report a private security bug that is not immediately
>    public, please contact one of the downstream distributions with
>    security teams.  The following teams have volunteered to handle
>    such bugs:
>
>       Debian:  security@debian.org
>       Red Hat: secalert@redhat.com
>       SUSE:    security@suse.de


Can you also please add:

AdaCore:  product-security@adacore.com


>
>    Please report the bug to just one of these teams.  It will be shared
>    with other teams as necessary.
>
>    The team contacted will take care of details such as vulnerability
>    rating and CVE assignment (http://cve.mitre.org/about/).  It is likely
>    that the team will ask to file a public bug because the issue is
>    sufficiently minor and does not warrant an embargo.  An embargo is not
>    a requirement for being credited with the discovery of a security
>    vulnerability.
>
> Reporting public security bugs
> ==============================
>
>    It is expected that critical security bugs will be rare, and that most
> security bugs can be reported in GCC's Bugzilla, thus making
>    them public immediately.  The system can be found here:
>
>       https://gcc.gnu.org/bugzilla/
>

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2023-08-08 12:52     ` Richard Biener
  2023-08-08 13:01       ` Jakub Jelinek
@ 2024-02-09 15:38       ` Martin Jambor
  2024-02-09 15:55         ` Siddhesh Poyarekar
  1 sibling, 1 reply; 72+ messages in thread
From: Martin Jambor @ 2024-02-09 15:38 UTC (permalink / raw)
  To: Richard Biener, Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell

Hi,

On Tue, Aug 08 2023, Richard Biener via Gcc-patches wrote:
> On Tue, Aug 8, 2023 at 2:33 PM Siddhesh Poyarekar <siddhesh@gotplt.org> wrote:
>>
>> On 2023-08-08 04:16, Richard Biener wrote:
>> > On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
>> > <gcc-patches@gcc.gnu.org> wrote:
>> >>
>> >> FOSS Best Practices recommends that projects have an official Security
>> >> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
>> >> repository.  GLIBC and Binutils have added such documents.
>> >>
>> >> Appended is a prototype for a Security policy file for GCC based on the
>> >> Binutils document because GCC seems to have more affinity with Binutils as
>> >> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
>> >> require additional security policies?
>> >>
>> >> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
>> >> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
>> >> Security policy?
>> >>
>> >> [ ] Does GCC, or some components of GCC, require additional care because of
>> >> runtime libraries like libgcc and libstdc++, and because of gcov and
>> >> profile-directed feedback?
>> >
>> > I do think that the runtime libraries should at least be explicitly mentioned
>> > because they fall into the "generated output" category and bugs in the
>> > runtime are usually more severe as affecting a wider class of inputs.
>>
>> Ack, I'd expect libstdc++ and libgcc to be aligned with glibc's
>> policies.  libiberty and others on the other hand, would probably be
>> more suitably aligned with binutils libbfd, where we assume trusted input.
>>
>> >> Thoughts?
>> >>
>> >> Thanks, David
>> >>
>> >> GCC Security Process
>> >> ====================
>> >>
>> >> What is a GCC security bug?
>> >> ===========================
>> >>
>> >>      A security bug is one that threatens the security of a system or
>> >>      network, or might compromise the security of data stored on it.
>> >>      In the context of GCC there are two ways in which such
>> >>      bugs might occur.  In the first, the programs themselves might be
>> >>      tricked into a direct compromise of security.  In the second, the
>> >>      tools might introduce a vulnerability in the generated output that
>> >>      was not already present in the files used as input.
>> >>
>> >>      Other than that, all other bugs will be treated as non-security
>> >>      issues.  This does not mean that they will be ignored, just that
>> >>      they will not be given the priority that is given to security bugs.
>> >>
>> >>      This stance applies to the creation tools in the GCC (e.g.,
>> >>      gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
>> >>      libraries that they use.
>> >>
>> >> Notes:
>> >> ======
>> >>
>> >>      None of the programs in GCC need elevated privileges to operate and
>> >>      it is recommended that users do not use them from accounts where such
>> >>      privileges are automatically available.
>> >
>> > I'll note that we could ourselves mitigate some of that by handling privileged
>> > invocation of the driver specially, dropping privs on exec of the sibling tools
>> > and possibly using temporary files or pipes to do the parts of the I/O that
>> > need to be privileged.
>>
>> It's not a bad idea, but it ends up legitimizing running the
>> compiler as root, pushing the responsibility of privilege management to
>> the driver.  How about rejecting invocation as root altogether by
>> default, bypassed with a --run-as-root flag instead?
>>
>> I've also been thinking about a --sandbox flag that isolates the build
>> process (for gcc as well as binutils) into a separate namespace so that
>> it's usable in a restricted mode on untrusted sources without exposing
>> the rest of the system to it.
>
> There's probably external tools to do this, not sure if we should replicate
> things in the driver for this.
>
> But sure, I think the driver is the proper point to address any of such
> issues - iff we want to address them at all.  Maybe a nice little
> google summer-of-code project ;)
>

If anyone is interested in scoping this and then mentoring this as a
Google Summer of Code project this year then now is the right time to
speak up!

Thanks,

Martin

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2024-02-09 15:38       ` Martin Jambor
@ 2024-02-09 15:55         ` Siddhesh Poyarekar
  2024-02-09 17:14           ` Joseph Myers
  2024-02-12 13:16           ` Martin Jambor
  0 siblings, 2 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2024-02-09 15:55 UTC (permalink / raw)
  To: Martin Jambor, Richard Biener
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell

On 2024-02-09 10:38, Martin Jambor wrote:
> If anyone is interested in scoping this and then mentoring this as a
> Google Summer of Code project this year then now is the right time to
> speak up!

I can help with mentoring and reviews, although I'll need someone to 
assist with actual approvals.

There are two distinct sets of ideas to explore, one is privilege 
management and the other sandboxing.

For privilege management we could add a --allow-root driver flag that 
allows gcc to run as root.  Without the flag one could either outright 
refuse to run or drop privileges and run.  Dropping privileges will be a 
bit tricky to implement because it would need a user to drop privileges 
to and then there would be the question of how to manage file access to 
read the compiler input and write out the compiler output.  If there's 
no such user, gcc could refuse to run as root by default.  I wonder 
though if from a security posture perspective it makes sense to simply 
discourage running as root all the time and not bother trying to make it 
work with dropped privileges and all that.  Of course it would mean that 
this would be less of a "project"; it'll be a simple enough patch to 
refuse to run until --allow-root is specified.

This probably ties in somewhat with an idea David Malcolm had riffed on 
with me earlier, of caching files for diagnostics.  If we could unify 
file accesses somehow, we could make this happen, i.e. open/read files 
as root and then do all execution as non-root.

Sandboxing will have similar requirements, i.e. map in input files and 
an output file handle upfront and then unshare() into a sandbox to do 
the actual compilation.  This will make sure that at least the 
processing of inputs does not affect the system on which the compilation 
is being run.

Sid

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2024-02-09 15:55         ` Siddhesh Poyarekar
@ 2024-02-09 17:14           ` Joseph Myers
  2024-02-09 17:39             ` Siddhesh Poyarekar
  2024-02-12 13:16           ` Martin Jambor
  1 sibling, 1 reply; 72+ messages in thread
From: Joseph Myers @ 2024-02-09 17:14 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Martin Jambor, Richard Biener, David Edelsohn, GCC Patches,
	Carlos O'Donell

On Fri, 9 Feb 2024, Siddhesh Poyarekar wrote:

> For privilege management we could add a --allow-root driver flag that allows
> gcc to run as root.  Without the flag one could either outright refuse to run
> or drop privileges and run.  Dropping privileges will be a bit tricky to
> implement because it would need a user to drop privileges to and then there
> would be the question of how to manage file access to read the compiler input
> and write out the compiler output.  If there's no such user, gcc could refuse
> to run as root by default.  I wonder though if from a security posture
> perspective it makes sense to simply discourage running as root all the time
> and not bother trying to make it work with dropped privileges and all that.
> Of course it would mean that this would be less of a "project"; it'll be a
> simple enough patch to refuse to run until --allow-root is specified.

I think disallowing running as root would be a big problem in practice - 
the typical problem case is when people build software as non-root and run 
"make install" as root, and for some reason "make install" wants to 
(re)build or (re)link something.

-- 
Joseph S. Myers
josmyers@redhat.com


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2024-02-09 17:14           ` Joseph Myers
@ 2024-02-09 17:39             ` Siddhesh Poyarekar
  2024-02-09 20:06               ` Joseph Myers
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2024-02-09 17:39 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Martin Jambor, Richard Biener, David Edelsohn, GCC Patches,
	Carlos O'Donell

On 2024-02-09 12:14, Joseph Myers wrote:
> On Fri, 9 Feb 2024, Siddhesh Poyarekar wrote:
> 
>> For privilege management we could add a --allow-root driver flag that allows
>> gcc to run as root.  Without the flag one could either outright refuse to run
>> or drop privileges and run.  Dropping privileges will be a bit tricky to
>> implement because it would need a user to drop privileges to and then there
>> would be the question of how to manage file access to read the compiler input
>> and write out the compiler output.  If there's no such user, gcc could refuse
>> to run as root by default.  I wonder though if from a security posture
>> perspective it makes sense to simply discourage running as root all the time
>> and not bother trying to make it work with dropped privileges and all that.
>> Of course it would mean that this would be less of a "project"; it'll be a
>> simple enough patch to refuse to run until --allow-root is specified.
> 
> I think disallowing running as root would be a big problem in practice -
> the typical problem case is when people build software as non-root and run
> "make install" as root, and for some reason "make install" wants to
> (re)build or (re)link something.

Isn't that a problematic practice though?  Or maybe have those 
invocations be separated out as CC_ROOT?

Thanks,
Sid

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2024-02-09 17:39             ` Siddhesh Poyarekar
@ 2024-02-09 20:06               ` Joseph Myers
  2024-02-12 13:32                 ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Joseph Myers @ 2024-02-09 20:06 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Martin Jambor, Richard Biener, David Edelsohn, GCC Patches,
	Carlos O'Donell

On Fri, 9 Feb 2024, Siddhesh Poyarekar wrote:

> > I think disallowing running as root would be a big problem in practice -
> > the typical problem case is when people build software as non-root and run
> > "make install" as root, and for some reason "make install" wants to
> > (re)build or (re)link something.
> 
> Isn't that a problematic practice though?  Or maybe have those invocations be
> separated out as CC_ROOT?

Ideally dependencies would be properly set up so that everything is built 
in the original build, and ideally there would be no need to relink at 
install time (I'm not sure of the exact circumstances in which it might be 
needed, or on what OSes to e.g. encode the right library paths in final 
installed executables).  In practice I think it's common for some building 
to take place at install time.

There is a more general principle here of composability: it's not helpful 
for being able to write scripts or makefiles combining invocations of 
different utilities and have them behave predictably if some of those 
utilities start making judgements about whether it's a good idea to run 
them in a particular environment rather than just doing their job 
independent of irrelevant aspects of the environment.  The semantics of 
invoking "gcc" have nothing to do with whether it's run as root; it should 
never need to look up what user it's running as at all.  (And it's 
probably also a bad idea for lots of separate utilities to gain their own 
ways to run in a restricted environment, for similar reasons; rather than 
teaching "gcc" a way to create a restricted environment itself, ensure 
there are easy-to-use more general utilities for running arbitrary 
programs on untrusted input in a contained environment.)

-- 
Joseph S. Myers
josmyers@redhat.com


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2024-02-09 15:55         ` Siddhesh Poyarekar
  2024-02-09 17:14           ` Joseph Myers
@ 2024-02-12 13:16           ` Martin Jambor
  2024-02-12 13:35             ` Siddhesh Poyarekar
  1 sibling, 1 reply; 72+ messages in thread
From: Martin Jambor @ 2024-02-12 13:16 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, Richard Biener

Hi,

On Fri, Feb 09 2024, Siddhesh Poyarekar wrote:
> On 2024-02-09 10:38, Martin Jambor wrote:
>> If anyone is interested in scoping this and then mentoring this as a
>> Google Summer of Code project this year then now is the right time to
>> speak up!
>
> I can help with mentoring and reviews, although I'll need someone to 
> assist with actual approvals.

I'm sure that we could manage that.  The project does not look like it
would be a huge one.

>
> There are two distinct sets of ideas to explore, one is privilege 
> management and the other sandboxing.
>
> For privilege management we could add a --allow-root driver flag that 
> allows gcc to run as root.  Without the flag one could either outright 
> refuse to run or drop privileges and run.  Dropping privileges will be a 
> bit tricky to implement because it would need a user to drop privileges 
> to and then there would be the question of how to manage file access to 
> read the compiler input and write out the compiler output.  If there's 
> no such user, gcc could refuse to run as root by default.  I wonder 
> though if from a security posture perspective it makes sense to simply 
> discourage running as root all the time and not bother trying to make it 
> work with dropped privileges and all that.  Of course it would mean that 
> this would be less of a "project"; it'll be a simple enough patch to 
> refuse to run until --allow-root is specified.

Yeah, this would not be enough for a GSoC project, not even for their
new small project category.

Additionally, I think that many, if not all, Linux distributions that
build binary packages do it in a VM/container/chroot where they do it
simply under root because the whole environment is there just for the
build.  So this would complicate lives for an important set of our
users.

>
> This probably ties in somewhat with an idea David Malcolm had riffed on 
> with me earlier, of caching files for diagnostics.  If we could unify 
> file accesses somehow, we could make this happen, i.e. open/read files 
> as root and then do all execution as non-root.
>
> Sandboxing will have similar requirements, i.e. map in input files and 
> an output file handle upfront and then unshare() into a sandbox to do 
> the actual compilation.  This will make sure that at least the 
> processing of inputs does not affect the system on which the compilation 
> is being run.

Right.  As we often just download some (sometimes large) pre-processed
source from Bugzilla and then happily run GCC on it on our computers,
this feature might actually be useful for us (still, we'd probably need
a more concrete description of what we want; would, e.g., using "-wrapper
gdb,--args" work in such a sandbox?).  I agree that even for some
semi-complex builds, a more general sandboxing solution is probably
better.

Martin

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [RFC] GCC Security policy
  2024-02-09 20:06               ` Joseph Myers
@ 2024-02-12 13:32                 ` Siddhesh Poyarekar
  0 siblings, 0 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2024-02-12 13:32 UTC (permalink / raw)
  To: Joseph Myers
  Cc: Martin Jambor, Richard Biener, David Edelsohn, GCC Patches,
	Carlos O'Donell

On 2024-02-09 15:06, Joseph Myers wrote:
> Ideally dependencies would be properly set up so that everything is built
> in the original build, and ideally there would be no need to relink at
> install time (I'm not sure of the exact circumstances in which it might be
> needed, or on what OSes to e.g. encode the right library paths in final
> installed executables).  In practice I think it's common for some building
> to take place at install time.
> 
> There is a more general principle here of composability: it's not helpful
> for being able to write scripts or makefiles combining invocations of
> different utilities and have them behave predictably if some of those
> utilities start making judgements about whether it's a good idea to run
> them in a particular environment rather than just doing their job
> independent of irrelevant aspects of the environment.  The semantics of
> invoking "gcc" have nothing to do with whether it's run as root; it should
> never need to look up what user it's running as at all.  (And it's
> probably also a bad idea for lots of separate utilities to gain their own
> ways to run in a restricted environment, for similar reasons; rather than
> teaching "gcc" a way to create a restricted environment itself, ensure
> there are easy-to-use more general utilities for running arbitrary
> programs on untrusted input in a contained environment.)

I see your point.  The way you put it, there's no GCC project here at 
all then.

Sid


* Re: [RFC] GCC Security policy
  2024-02-12 13:16           ` Martin Jambor
@ 2024-02-12 13:35             ` Siddhesh Poyarekar
  2024-02-12 15:00               ` Richard Biener
  0 siblings, 1 reply; 72+ messages in thread
From: Siddhesh Poyarekar @ 2024-02-12 13:35 UTC (permalink / raw)
  To: Martin Jambor
  Cc: David Edelsohn, GCC Patches, Carlos O'Donell, Richard Biener

On 2024-02-12 08:16, Martin Jambor wrote:
>> This probably ties in somewhat with an idea David Malcolm had riffed on
>> with me earlier, of caching files for diagnostics.  If we could unify
>> file accesses somehow, we could make this happen, i.e. open/read files
>> as root and then do all execution as non-root.
>>
>> Sandboxing will have similar requirements, i.e. map in input files and
>> an output file handle upfront and then unshare() into a sandbox to do
>> the actual compilation.  This will make sure that at least the
>> processing of inputs does not affect the system on which the compilation
>> is being run.
> 
> Right.  As we often just download some (sometimes large) pre-processed
> source from Bugzilla and then happily run GCC on it on our computers,
> this feature might actually be useful for us (still, we'd probably need
> a more concrete description of what we want; would, e.g., using "-wrapper
> gdb,--args" work in such a sandbox?).  I agree that even for some
> semi-complex builds, a more general sandboxing solution is probably
> better.

Joseph seems to be leaning towards nudging people to a general 
sandboxing solution too.  The question then is whether this takes the 
shape of a utility (in, e.g., contrib) that builds such a sandbox, or 
simply a wiki page.

Thanks,
Sid


* Re: [RFC] GCC Security policy
  2024-02-12 13:35             ` Siddhesh Poyarekar
@ 2024-02-12 15:00               ` Richard Biener
  2024-02-13 12:34                 ` Siddhesh Poyarekar
  0 siblings, 1 reply; 72+ messages in thread
From: Richard Biener @ 2024-02-12 15:00 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Martin Jambor, David Edelsohn, GCC Patches, Carlos O'Donell

On Mon, Feb 12, 2024 at 2:35 PM Siddhesh Poyarekar <siddhesh@gotplt.org> wrote:
>
> On 2024-02-12 08:16, Martin Jambor wrote:
> >> This probably ties in somewhat with an idea David Malcolm had riffed on
> >> with me earlier, of caching files for diagnostics.  If we could unify
> >> file accesses somehow, we could make this happen, i.e. open/read files
> >> as root and then do all execution as non-root.
> >>
> >> Sandboxing will have similar requirements, i.e. map in input files and
> >> an output file handle upfront and then unshare() into a sandbox to do
> >> the actual compilation.  This will make sure that at least the
> >> processing of inputs does not affect the system on which the compilation
> >> is being run.
> >
> > Right.  As we often just download some (sometimes large) pre-processed
> > source from Bugzilla and then happily run GCC on it on our computers,
> > this feature might actually be useful for us (still, we'd probably need
> > a more concrete description of what we want; would, e.g., using "-wrapper
> > gdb,--args" work in such a sandbox?).  I agree that even for some
> > semi-complex builds, a more general sandboxing solution is probably
> > better.
>
> Joseph seems to be leaning towards nudging people to a general
> sandboxing solution too.  The question then is whether this takes the
> shape of a utility in, e.g. contrib that builds such a sandbox or simply
> a wiki page.

GCC driver support would then extend to identifying the inputs and the outputs.
I'm not sure a generic utility can achieve this unless the outputs are
retrieved from somewhere other than the "usual" place used when invoking
un-sandboxed.

Even the driver doesn't necessarily know all the files read or written.

So I suppose a better definition of the actual goal is in order.

> gcc -sandboxed -O2 -c t.ii -fdump-tree-all

What should this do?  IMO the invoked tools (gas, cc1plus) should be
restricted to accessing only the input files.  Ideally the dumps would
appear where they appear when not sandboxed, but clearly overwriting
existing files would be problematic; writing new files less so, though
only to the standard (or specified) auxiliary output file paths.

Richard.

> Thanks,
> Sid


* Re: [RFC] GCC Security policy
  2024-02-12 15:00               ` Richard Biener
@ 2024-02-13 12:34                 ` Siddhesh Poyarekar
  0 siblings, 0 replies; 72+ messages in thread
From: Siddhesh Poyarekar @ 2024-02-13 12:34 UTC (permalink / raw)
  To: Richard Biener
  Cc: Martin Jambor, David Edelsohn, GCC Patches, Carlos O'Donell

On 2024-02-12 10:00, Richard Biener wrote:
> GCC driver support would then extend to identifying the inputs and the outputs.

We already have -MM to generate a list of non-system dependencies, so 
gcc should be able to pass the inputs on to the tool, which could then 
map those files (and the system header directories) into the sandbox 
before invocation.  The output file could perhaps be required to be a 
new one, i.e. fail if the target already exists.
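[The "fail if the target already exists" rule maps directly onto
O_CREAT|O_EXCL, which makes the existence check and the creation a
single atomic step.  A sketch; the helper name is made up:]

```c
/* Sketch: enforce that the sandbox's output file is newly created,
   refusing to clobber an existing file.  O_CREAT | O_EXCL performs
   the check and the creation atomically, avoiding a TOCTOU race.
   The helper name is illustrative only.  */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

static int
open_fresh_output (const char *path)
{
  int fd = open (path, O_WRONLY | O_CREAT | O_EXCL, 0666);
  if (fd == -1 && errno == EEXIST)
    fprintf (stderr, "refusing to overwrite existing output: %s\n", path);
  return fd;  /* -1 on any failure */
}
```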

> I'm not sure a generic utility can achieve this unless the outputs are
> retrieved from somewhere other than the "usual" place used when invoking
> un-sandboxed.
> 
> Even the driver doesn't necessarily know all the files read or written.
> 
> So I suppose a better definition of the actual goal is in order.
> 
>> gcc -sandboxed -O2 -c t.ii -fdump-tree-all
> 
> What should this do?  IMO the invoked tools (gas, cc1plus) should be
> restricted to accessing only the input files.  Ideally the dumps would
> appear where they appear when not sandboxed, but clearly overwriting
> existing files would be problematic; writing new files less so, though
> only to the standard (or specified) auxiliary output file paths.

Couldn't we get away with not having to handle dump files?  They don't 
seem to be sensitive targets.

Thanks,
Sid


end of thread, other threads:[~2024-02-13 12:35 UTC | newest]

Thread overview: 72+ messages
2023-08-07 17:29 [RFC] GCC Security policy David Edelsohn
2023-08-08  8:16 ` Richard Biener
2023-08-08 12:33   ` Siddhesh Poyarekar
2023-08-08 12:52     ` Richard Biener
2023-08-08 13:01       ` Jakub Jelinek
2023-08-08 13:21         ` Richard Biener
2023-08-08 13:24         ` Michael Matz
2023-08-08 13:33         ` Paul Koning
2023-08-08 15:48           ` David Malcolm
2023-08-08 15:55             ` Siddhesh Poyarekar
2023-08-08 16:35               ` Paul Koning
2023-08-08 20:02             ` Joseph Myers
2023-08-08 13:34         ` Ian Lance Taylor
2023-08-08 14:04           ` Richard Biener
2023-08-08 14:06             ` Siddhesh Poyarekar
2023-08-08 14:14               ` David Edelsohn
2023-08-08 14:30                 ` Siddhesh Poyarekar
2023-08-08 14:37                   ` Jakub Jelinek
2023-08-08 14:40                     ` Siddhesh Poyarekar
2023-08-08 16:22                       ` Richard Earnshaw (lists)
2023-08-08 17:35                     ` Ian Lance Taylor
2023-08-08 17:46                       ` David Edelsohn
2023-08-08 19:39                         ` Carlos O'Donell
2023-08-09 13:25                           ` Richard Earnshaw (lists)
2023-08-09 17:32                   ` Siddhesh Poyarekar
2023-08-09 18:17                     ` David Edelsohn
2023-08-09 20:12                       ` Siddhesh Poyarekar
2023-08-10 18:28                     ` Richard Sandiford
2023-08-10 18:50                       ` Siddhesh Poyarekar
2023-08-11 14:36                         ` Siddhesh Poyarekar
2023-08-11 15:09                           ` Paul Koning
2023-08-11 15:20                             ` Siddhesh Poyarekar
2023-08-10 19:27                       ` Richard Biener
2023-08-11 15:12                     ` David Edelsohn
2023-08-11 15:22                       ` Siddhesh Poyarekar
2024-02-09 15:38       ` Martin Jambor
2024-02-09 15:55         ` Siddhesh Poyarekar
2024-02-09 17:14           ` Joseph Myers
2024-02-09 17:39             ` Siddhesh Poyarekar
2024-02-09 20:06               ` Joseph Myers
2024-02-12 13:32                 ` Siddhesh Poyarekar
2024-02-12 13:16           ` Martin Jambor
2024-02-12 13:35             ` Siddhesh Poyarekar
2024-02-12 15:00               ` Richard Biener
2024-02-13 12:34                 ` Siddhesh Poyarekar
2023-08-14 13:26 ` Siddhesh Poyarekar
2023-08-14 18:51   ` Richard Sandiford
2023-08-14 19:31     ` Siddhesh Poyarekar
2023-08-14 21:16       ` Alexander Monakov
2023-08-14 21:50         ` Siddhesh Poyarekar
2023-08-15  5:59           ` Alexander Monakov
2023-08-15 10:33             ` Siddhesh Poyarekar
2023-08-15 14:07               ` Alexander Monakov
2023-08-15 14:54                 ` Paul Koning
2023-08-15 19:13                 ` Siddhesh Poyarekar
2023-08-15 23:07                   ` Alexander Monakov
2023-08-15 23:45                     ` David Edelsohn
2023-08-16  0:37                       ` Alexander Monakov
2023-08-16  0:50                         ` Paul Koning
2023-08-16  7:53                           ` Alexander Monakov
2023-08-16 13:06                             ` Paul Koning
2023-08-16  9:05                     ` Toon Moene
2023-08-16 12:19                     ` Siddhesh Poyarekar
2023-08-16 15:06                       ` Alexander Monakov
2023-08-16 15:18                         ` Siddhesh Poyarekar
2023-08-16 16:02                           ` Alexander Monakov
2023-08-15 23:45   ` David Malcolm
2023-08-16  8:25     ` Alexander Monakov
2023-08-16 11:39       ` Siddhesh Poyarekar
2023-08-16 11:50         ` Alexander Monakov
2023-09-06 11:23 ` Siddhesh Poyarekar
2023-09-20  7:36 ` Arnaud Charlet

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).