public inbox for gcc-cvs@sourceware.org
* [gcc(refs/vendors/ARM/heads/CVE-2023-4039/gcc-9)] aarch64: Tweak stack clash boundary condition
@ 2023-09-12 15:24 Richard Sandiford
  0 siblings, 0 replies; only message in thread
From: Richard Sandiford @ 2023-09-12 15:24 UTC (permalink / raw)
  To: gcc-cvs

https://gcc.gnu.org/g:4dd8925d95d3d6d89779b494b5f4cfadcf9fa96e

commit 4dd8925d95d3d6d89779b494b5f4cfadcf9fa96e
Author: Richard Sandiford <richard.sandiford@arm.com>
Date:   Tue Jun 27 15:11:44 2023 +0100

    aarch64: Tweak stack clash boundary condition
    
    The AArch64 ABI says that, when stack clash protection is used,
    there can be a maximum of 1KiB of unprobed space at sp on entry
    to a function.  Therefore, we need to probe when allocating
    >= guard_size - 1KiB of data (>= rather than >).  This is what
    GCC does.
    
    If an allocation is exactly guard_size bytes, it is enough to allocate
    those bytes and probe once at offset 1024.  It isn't possible to use a
    single probe at any other offset: higher would complicate later code,
    by leaving more unprobed space than usual, while lower would risk
    leaving an entire page unprobed.  For simplicity, the code probes all
    allocations at offset 1024.
    
    Some register saves also act as probes.  If we need to allocate
    more space below the last such register save probe, we need to
    probe the allocation if it is > 1KiB.  Again, this allocation is
    then sometimes (but not always) probed at offset 1024.  This sort of
    allocation is currently only used for outgoing arguments, which are
    rarely this big.
    
    However, the code also probed if this final outgoing-arguments
    allocation was == 1KiB, rather than just > 1KiB.  This isn't
    necessary, since the register save then probes at offset 1024
    as required.  Continuing to probe allocations of exactly 1KiB
    would complicate later patches.
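    
    As an illustrative sketch (not part of the patch), the two probing
    decisions described above can be modelled in plain C.  It assumes the
    4KiB guard size used by the test below
    (--param stack-clash-protection-guard-size=12), the 1KiB
    STACK_CLASH_CALLER_GUARD, and a register-save probe at offset 0; the
    helper names are hypothetical:
    
    ```c
    #include <assert.h>
    #include <stdbool.h>
    
    /* Hypothetical model of the probing decision, not GCC code.  */
    #define GUARD_SIZE (1L << 12)   /* --param ...-guard-size=12 => 4KiB */
    #define CALLER_GUARD 1024L      /* STACK_CLASH_CALLER_GUARD: 1KiB */
    
    /* Initial allocation: the caller may leave up to 1KiB of unprobed
       space at sp, so any allocation of >= guard_size - 1KiB must be
       probed (>= rather than >).  */
    static bool must_probe_initial (long size)
    {
      return size >= GUARD_SIZE - CALLER_GUARD;
    }
    
    /* Final (outgoing-argument) allocation below a register save that
       acts as a probe: only allocations strictly greater than 1KiB need
       a probe of their own; exactly 1KiB is already covered by the
       register-save probe, which is the case this patch stops probing.  */
    static bool must_probe_final (long size)
    {
      return size > CALLER_GUARD;
    }
    
    int main (void)
    {
      assert (must_probe_initial (GUARD_SIZE - CALLER_GUARD));
      assert (!must_probe_initial (GUARD_SIZE - CALLER_GUARD - 16));
      assert (!must_probe_final (1024));  /* as in test1 below */
      assert (must_probe_final (1040));   /* as in test2 below */
      return 0;
    }
    ```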
    
    gcc/
            * config/aarch64/aarch64.c (aarch64_allocate_and_probe_stack_space):
            Don't probe final allocations that are exactly 1KiB in size (after
            unprobed space above the final allocation has been deducted).
    
    gcc/testsuite/
            * gcc.target/aarch64/stack-check-prologue-17.c: New test.

Diff:
---
 gcc/config/aarch64/aarch64.c                       |  6 ++-
 .../gcc.target/aarch64/stack-check-prologue-17.c   | 55 ++++++++++++++++++++++
 2 files changed, 60 insertions(+), 1 deletion(-)

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 2681e0c2bb90..4c9e11cd7cff 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -5506,6 +5506,8 @@ aarch64_allocate_and_probe_stack_space (rtx temp1, rtx temp2,
   HOST_WIDE_INT guard_size
     = 1 << PARAM_VALUE (PARAM_STACK_CLASH_PROTECTION_GUARD_SIZE);
   HOST_WIDE_INT guard_used_by_caller = STACK_CLASH_CALLER_GUARD;
+  HOST_WIDE_INT byte_sp_alignment = STACK_BOUNDARY / BITS_PER_UNIT;
+  gcc_assert (multiple_p (poly_size, byte_sp_alignment));
   /* When doing the final adjustment for the outgoing argument size we can't
      assume that LR was saved at position 0.  So subtract it's offset from the
      ABI safe buffer so that we don't accidentally allow an adjustment that
@@ -5513,7 +5515,9 @@ aarch64_allocate_and_probe_stack_space (rtx temp1, rtx temp2,
      probing.  */
   HOST_WIDE_INT min_probe_threshold
     = final_adjustment_p
-      ? guard_used_by_caller - cfun->machine->frame.reg_offset[LR_REGNUM]
+      ? (guard_used_by_caller
+	 + byte_sp_alignment
+	 - cfun->machine->frame.reg_offset[LR_REGNUM])
       : guard_size - guard_used_by_caller;
 
   poly_int64 frame_size = cfun->machine->frame.frame_size;
diff --git a/gcc/testsuite/gcc.target/aarch64/stack-check-prologue-17.c b/gcc/testsuite/gcc.target/aarch64/stack-check-prologue-17.c
new file mode 100644
index 000000000000..0d8a25d73a24
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/stack-check-prologue-17.c
@@ -0,0 +1,55 @@
+/* { dg-options "-O2 -fstack-clash-protection -fomit-frame-pointer --param stack-clash-protection-guard-size=12" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+void f(int, ...);
+void g();
+
+/*
+** test1:
+**	...
+**	str	x30, \[sp\]
+**	sub	sp, sp, #1024
+**	cbnz	w0, .*
+**	bl	g
+**	...
+*/
+int test1(int z) {
+  __uint128_t x = 0;
+  int y[0x400];
+  if (z)
+    {
+      f(0, 0, 0, 0, 0, 0, 0, &y,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x);
+    }
+  g();
+  return 1;
+}
+
+/*
+** test2:
+**	...
+**	str	x30, \[sp\]
+**	sub	sp, sp, #1040
+**	str	xzr, \[sp\]
+**	cbnz	w0, .*
+**	bl	g
+**	...
+*/
+int test2(int z) {
+  __uint128_t x = 0;
+  int y[0x400];
+  if (z)
+    {
+      f(0, 0, 0, 0, 0, 0, 0, &y,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x, x, x, x, x, x, x, x, x, x, x, x, x, x, x, x,
+	x);
+    }
+  g();
+  return 1;
+}
