public inbox for gcc-patches@gcc.gnu.org
From: Jakub Jelinek <jakub@redhat.com>
To: Jeff Law <law@redhat.com>
Cc: Joseph Myers <joseph@codesourcery.com>,
	       gcc-patches <gcc-patches@gcc.gnu.org>
Subject: Re: RFC: stack/heap collision vulnerability and mitigation with GCC
Date: Mon, 19 Jun 2017 19:45:00 -0000
Message-ID: <20170619194541.GB2123@tucnak>
In-Reply-To: <c36f6eb2-2478-c901-9390-bc8492aee09b@redhat.com>

On Mon, Jun 19, 2017 at 01:04:57PM -0600, Jeff Law wrote:
> On 06/19/2017 11:50 AM, Joseph Myers wrote:
> > On Mon, 19 Jun 2017, Jeff Law wrote:
> > 
> >> A key point to remember is that you can never have an allocation
> >> (potentially using more than one allocation site) which is larger than a
> >> page without probing the page.
> > 
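As a concrete illustration of that rule (the 4K guard/probe granularity and
both function names below are only assumptions for the sketch, not anything
GCC defines):

#include <stddef.h>

#define GUARD_SIZE 4096   /* assumed guard/probe granularity: one 4K page */

/* A frame this large moves the stack pointer several pages in one step.
   If the stack was already near its limit, the first write close to the
   new stack pointer can land beyond a single-page guard without ever
   touching it - the stack-clash scenario this thread is about.  */
void unprobed_large_frame (void)
{
  char buf[16 * GUARD_SIZE];
  buf[0] = 0;   /* deepest byte of the frame; may already sit past the guard */
}

/* What probing amounts to: touch at least one byte in every GUARD_SIZE
   chunk, starting from the end nearest the caller's frame, so the guard
   page faults before anything beyond it is written.  At the C level the
   whole frame is allocated before these stores run, so the real probes
   have to be emitted by the compiler interleaved with the stack-pointer
   adjustment; the loop only visualizes the page-by-page touching.  */
void probed_large_frame (void)
{
  char buf[16 * GUARD_SIZE];
  for (size_t off = sizeof buf; off > 0; off -= GUARD_SIZE)
    buf[off - 1] = 0;
}
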
> > There's a platform ABI issue here.  At least some kernel fixes for these 
> > stack issues, as I understand it, increase the size of the stack guard to 
> > more than a single page.  It would be possible to define the ABI to 
> > require such a larger guard for protection and so reduce the number of 
> > (non-alloca/VLA-using) functions that need probes generated, depending on 
> > whether a goal is to achieve security on kernels without such a fix.  
> > (Thinking in terms of how to get to enabling such probes by default.)
> On 32-bit platforms we don't have a lot of address space left, so we
> have to be careful about creating too large a guard.
> 
> On 64-bit platforms we have a lot more freedom, and I suspect larger
> guards, mandated by the ABI, would be useful, if for no other reason than
> allowing us to allocate more stack without probing.  A simple array of
> PATH_MAX characters triggers probing right now.  I suspect (but didn't
> bother to confirm) that PATH_MAX arrays are what cause git to have so
> many large stacks.
> 
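For scale: PATH_MAX is 4096 on Linux, so even a very ordinary helper like
the hypothetical one below has a frame larger than one 4K page once saved
registers and other locals are counted, and would therefore need a probe
under a single-page guard:

#include <limits.h>   /* PATH_MAX, 4096 on Linux */
#include <stdio.h>
#include <unistd.h>

int
read_link_target (const char *link, char *out, size_t outlen)
{
  char buf[PATH_MAX];   /* one full 4K page of locals on its own */
  ssize_t n = readlink (link, buf, sizeof buf - 1);
  if (n < 0)
    return -1;
  buf[n] = '\0';
  return snprintf (out, outlen, "%s", buf);
}
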
> Also, if we look at something like ppc and aarch64, we've currently got
> the PROBE_INTERVAL set to 4K.  But in reality they're using much larger
> page sizes.  So we could improve things there as well.

ppc can use 4K, 16K, 64K or 256K pages, aarch64 4K, 16K or 64K.
So, unless the ABI (or some ABI extension for Linux) says that the guard
page is at least 16K or 64K on these arches (and unless glibc changes the
default pthread_attr_getguardsize value, which is currently one page
everywhere), you can't rely on more than 4K there.
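
For reference, that glibc default can be inspected, and raised per thread,
with the standard pthread attribute calls.  A minimal sketch, built with
-pthread; the 64K request is just an example value, not a recommendation:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *
worker (void *arg)
{
  (void) arg;
  return NULL;
}

int
main (void)
{
  pthread_attr_t attr;
  size_t guard;
  pthread_t tid;

  pthread_attr_init (&attr);
  pthread_attr_getguardsize (&attr, &guard);
  /* With current glibc this reports one page, whatever the page size is.  */
  printf ("default guard: %zu bytes, page size: %ld bytes\n",
          guard, sysconf (_SC_PAGESIZE));

  /* An application (not the compiler) can opt into a larger guard for the
     threads it creates; 64K here is only an example value.  */
  pthread_attr_setguardsize (&attr, 64 * 1024);
  pthread_create (&tid, &attr, worker, NULL);
  pthread_join (tid, NULL);
  pthread_attr_destroy (&attr);
  return 0;
}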

	Jakub


Thread overview: 66+ messages
2017-06-19 17:07 Jeff Law
2017-06-19 17:29 ` Jakub Jelinek
2017-06-19 17:45   ` Jeff Law
2017-06-19 17:51     ` Jakub Jelinek
2017-06-19 21:51       ` Jeff Law
2017-06-20  8:03       ` Uros Bizjak
2017-06-20 10:18         ` Richard Biener
2017-06-20 11:10           ` Uros Bizjak
2017-06-20 12:13             ` Florian Weimer
2017-06-20 12:17               ` Uros Bizjak
2017-06-20 12:20                 ` Uros Bizjak
2017-06-20 12:27                   ` Richard Biener
2017-06-20 21:57                     ` Jeff Law
2017-06-20 15:59                 ` Jeff Law
2017-06-19 18:00   ` Richard Biener
2017-06-19 18:02     ` Richard Biener
2017-06-19 18:15       ` Florian Weimer
2017-06-19 21:57         ` Jeff Law
2017-06-19 22:08       ` Jeff Law
2017-06-20  7:50   ` Eric Botcazou
2017-06-19 17:51 ` Joseph Myers
2017-06-19 17:55   ` Jakub Jelinek
2017-06-19 18:21   ` Florian Weimer
2017-06-19 21:56     ` Joseph Myers
2017-06-19 22:05       ` Jeff Law
2017-06-19 22:10         ` Florian Weimer
2017-06-19 19:05   ` Jeff Law
2017-06-19 19:45     ` Jakub Jelinek [this message]
2017-06-19 21:41       ` Jeff Law
2017-06-20  8:27     ` Richard Earnshaw (lists)
2017-06-20 15:50       ` Jeff Law
2017-06-19 18:12 ` Richard Kenner
2017-06-19 22:05   ` Jeff Law
2017-06-19 22:07     ` Richard Kenner
2017-06-20  8:21   ` Eric Botcazou
2017-06-20 15:50     ` Jeff Law
2017-06-20 19:48     ` Jakub Jelinek
2017-06-20 20:37       ` Eric Botcazou
2017-06-20 20:46         ` Jeff Law
2017-06-20  8:17 ` Eric Botcazou
2017-06-20 21:52   ` Jeff Law
2017-06-20 22:20     ` Eric Botcazou
2017-06-21 17:31       ` Jeff Law
2017-06-21 19:07     ` Florian Weimer
2017-06-21  7:56   ` Andreas Schwab
2017-06-20  9:27 ` Richard Earnshaw (lists)
2017-06-20 21:39   ` Jeff Law
2017-06-21  8:41     ` Richard Earnshaw (lists)
2017-06-21 17:25       ` Jeff Law
2017-06-22  9:53         ` Richard Earnshaw (lists)
2017-06-22 15:30           ` Jeff Law
2017-06-22 16:07             ` Szabolcs Nagy
2017-06-22 16:15               ` Jeff Law
2017-06-28  6:45           ` Florian Weimer
2017-07-13 23:21             ` Jeff Law
2017-07-18 19:54               ` Florian Weimer
2017-06-20 23:22 Wilco Dijkstra
2017-06-21  8:34 ` Richard Earnshaw (lists)
2017-06-21  8:44   ` Andreas Schwab
2017-06-21  8:46     ` Richard Earnshaw (lists)
2017-06-21  8:46       ` Richard Earnshaw (lists)
2017-06-21  9:03   ` Wilco Dijkstra
2017-06-21 17:05 ` Jeff Law
2017-06-21 17:47   ` Wilco Dijkstra
2017-06-22 16:10     ` Jeff Law
2017-06-22 22:57       ` Wilco Dijkstra
