From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Cc: Richard Sandiford <richard.sandiford@arm.com>
Subject: [PATCH 19/19] aarch64: Make stack smash canary protect saved registers
Date: Tue, 12 Sep 2023 16:25:29 +0100
Message-ID: <20230912152529.3322336-20-richard.sandiford@arm.com>
In-Reply-To: <20230912152529.3322336-1-richard.sandiford@arm.com>

AArch64 normally puts the saved registers near the bottom of the frame,
immediately above any dynamic allocations.  But this means that a
stack-smash attack on those dynamic allocations could overwrite the
saved registers without needing to reach as far as the stack smash
canary.

The same thing could also happen for variable-sized arguments that are
passed by value, since those are allocated before a call and popped on
return.

This patch avoids that by putting the locals (and thus the canary) below
the saved registers when stack smash protection is active.
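
Schematically (higher addresses at the top; see the updated diagram in
the patch below for the precise layout), the relevant part of the frame
changes from:

	|  local variables, canary      |
	+-------------------------------+
	|  saved registers (LR, FP)     |
	+-------------------------------+
	|  dynamic allocation           |
	+-------------------------------+

to, when stack protection is active:

	|  saved registers (LR, FP)     |
	+-------------------------------+
	|  local variables, canary      |
	+-------------------------------+
	|  dynamic allocation           |
	+-------------------------------+

An overflow from the dynamic area writes towards higher addresses, so
it now has to pass through the canary before it can reach the saved
registers.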

The patch fixes CVE-2023-4039.

gcc/
	* config/aarch64/aarch64.cc (aarch64_save_regs_above_locals_p):
	New function.
	(aarch64_layout_frame): Use it to decide whether locals should
	go above or below the saved registers.
	(aarch64_expand_prologue): Update stack layout comment.
	Emit a stack tie after the final adjustment.

gcc/testsuite/
	* gcc.target/aarch64/stack-protector-8.c: New test.
	* gcc.target/aarch64/stack-protector-9.c: Likewise.
---
 gcc/config/aarch64/aarch64.cc                 | 46 +++++++--
 .../gcc.target/aarch64/stack-protector-8.c    | 95 +++++++++++++++++++
 .../gcc.target/aarch64/stack-protector-9.c    | 33 +++++++
 3 files changed, 168 insertions(+), 6 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/aarch64/stack-protector-8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/stack-protector-9.c

diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
index 51e57370807..3739a44bfd9 100644
--- a/gcc/config/aarch64/aarch64.cc
+++ b/gcc/config/aarch64/aarch64.cc
@@ -8433,6 +8433,20 @@ aarch64_needs_frame_chain (void)
   return aarch64_use_frame_pointer;
 }
 
+/* Return true if the current function should save registers above
+   the locals area, rather than below it.  */
+
+static bool
+aarch64_save_regs_above_locals_p ()
+{
+  /* When using stack smash protection, make sure that the canary slot
+     comes between the locals and the saved registers.  Otherwise,
+     it would be possible for a carefully sized smash attack to change
+     the saved registers (particularly LR and FP) without reaching the
+     canary.  */
+  return crtl->stack_protect_guard;
+}
+
 /* Mark the registers that need to be saved by the callee and calculate
    the size of the callee-saved registers area and frame record (both FP
    and LR may be omitted).  */
@@ -8444,6 +8458,7 @@ aarch64_layout_frame (void)
   poly_int64 vector_save_size = GET_MODE_SIZE (vector_save_mode);
   bool frame_related_fp_reg_p = false;
   aarch64_frame &frame = cfun->machine->frame;
+  poly_int64 top_of_locals = -1;
 
   frame.emit_frame_chain = aarch64_needs_frame_chain ();
 
@@ -8510,9 +8525,16 @@ aarch64_layout_frame (void)
 	&& !crtl->abi->clobbers_full_reg_p (regno))
       frame.reg_offset[regno] = SLOT_REQUIRED;
 
+  bool regs_at_top_p = aarch64_save_regs_above_locals_p ();
 
   poly_int64 offset = crtl->outgoing_args_size;
   gcc_assert (multiple_p (offset, STACK_BOUNDARY / BITS_PER_UNIT));
+  if (regs_at_top_p)
+    {
+      offset += get_frame_size ();
+      offset = aligned_upper_bound (offset, STACK_BOUNDARY / BITS_PER_UNIT);
+      top_of_locals = offset;
+    }
   frame.bytes_below_saved_regs = offset;
   frame.sve_save_and_probe = INVALID_REGNUM;
 
@@ -8652,15 +8674,18 @@ aarch64_layout_frame (void)
      at expand_prologue.  */
   gcc_assert (crtl->is_leaf || maybe_ne (saved_regs_size, 0));
 
-  offset += get_frame_size ();
-  offset = aligned_upper_bound (offset, STACK_BOUNDARY / BITS_PER_UNIT);
-  auto top_of_locals = offset;
-
+  if (!regs_at_top_p)
+    {
+      offset += get_frame_size ();
+      offset = aligned_upper_bound (offset, STACK_BOUNDARY / BITS_PER_UNIT);
+      top_of_locals = offset;
+    }
   offset += frame.saved_varargs_size;
   gcc_assert (multiple_p (offset, STACK_BOUNDARY / BITS_PER_UNIT));
   frame.frame_size = offset;
 
   frame.bytes_above_hard_fp = frame.frame_size - frame.bytes_below_hard_fp;
+  gcc_assert (known_ge (top_of_locals, 0));
   frame.bytes_above_locals = frame.frame_size - top_of_locals;
 
   frame.initial_adjust = 0;
@@ -9979,10 +10004,10 @@ aarch64_epilogue_uses (int regno)
 	|  for register varargs         |
 	|                               |
 	+-------------------------------+
-	|  local variables              | <-- frame_pointer_rtx
+	|  local variables (1)          | <-- frame_pointer_rtx
 	|                               |
 	+-------------------------------+
-	|  padding                      |
+	|  padding (1)                  |
 	+-------------------------------+
 	|  callee-saved registers       |
 	+-------------------------------+
@@ -9994,6 +10019,10 @@ aarch64_epilogue_uses (int regno)
 	+-------------------------------+
 	|  SVE predicate registers      |
 	+-------------------------------+
+	|  local variables (2)          |
+	+-------------------------------+
+	|  padding (2)                  |
+	+-------------------------------+
 	|  dynamic allocation           |
 	+-------------------------------+
 	|  padding                      |
@@ -10003,6 +10032,9 @@ aarch64_epilogue_uses (int regno)
 	+-------------------------------+
 	|                               | <-- stack_pointer_rtx (aligned)
 
+   The regions marked (1) and (2) are mutually exclusive.  (2) is used
+   when aarch64_save_regs_above_locals_p is true.
+
    Dynamic stack allocations via alloca() decrease stack_pointer_rtx
    but leave frame_pointer_rtx and hard_frame_pointer_rtx
    unchanged.
@@ -10198,6 +10230,8 @@ aarch64_expand_prologue (void)
   gcc_assert (known_eq (bytes_below_sp, final_adjust));
   aarch64_allocate_and_probe_stack_space (tmp1_rtx, tmp0_rtx, final_adjust,
 					  !frame_pointer_needed, true);
+  if (emit_frame_chain && maybe_ne (final_adjust, 0))
+    aarch64_emit_stack_tie (hard_frame_pointer_rtx);
 }
 
 /* Return TRUE if we can use a simple_return insn.
diff --git a/gcc/testsuite/gcc.target/aarch64/stack-protector-8.c b/gcc/testsuite/gcc.target/aarch64/stack-protector-8.c
new file mode 100644
index 00000000000..e71d820e365
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/stack-protector-8.c
@@ -0,0 +1,95 @@
+/* { dg-options " -O -fstack-protector-strong -mstack-protector-guard=sysreg -mstack-protector-guard-reg=tpidr2_el0 -mstack-protector-guard-offset=16" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+void g(void *);
+__SVBool_t *h(void *);
+
+/*
+** test1:
+**	sub	sp, sp, #288
+**	stp	x29, x30, \[sp, #?272\]
+**	add	x29, sp, #?272
+**	mrs	(x[0-9]+), tpidr2_el0
+**	ldr	(x[0-9]+), \[\1, #?16\]
+**	str	\2, \[sp, #?264\]
+**	mov	\2, #?0
+**	add	x0, sp, #?8
+**	bl	g
+**	...
+**	mrs	.*
+**	...
+**	bne	.*
+**	...
+**	ldp	x29, x30, \[sp, #?272\]
+**	add	sp, sp, #?288
+**	ret
+**	bl	__stack_chk_fail
+*/
+int test1() {
+  int y[0x40];
+  g(y);
+  return 1;
+}
+
+/*
+** test2:
+**	stp	x29, x30, \[sp, #?-16\]!
+**	mov	x29, sp
+**	sub	sp, sp, #1040
+**	mrs	(x[0-9]+), tpidr2_el0
+**	ldr	(x[0-9]+), \[\1, #?16\]
+**	str	\2, \[sp, #?1032\]
+**	mov	\2, #?0
+**	add	x0, sp, #?8
+**	bl	g
+**	...
+**	mrs	.*
+**	...
+**	bne	.*
+**	...
+**	add	sp, sp, #?1040
+**	ldp	x29, x30, \[sp\], #?16
+**	ret
+**	bl	__stack_chk_fail
+*/
+int test2() {
+  int y[0x100];
+  g(y);
+  return 1;
+}
+
+#pragma GCC target "+sve"
+
+/*
+** test3:
+**	stp	x29, x30, \[sp, #?-16\]!
+**	mov	x29, sp
+**	addvl	sp, sp, #-18
+**	...
+**	str	p4, \[sp\]
+**	...
+**	sub	sp, sp, #272
+**	mrs	(x[0-9]+), tpidr2_el0
+**	ldr	(x[0-9]+), \[\1, #?16\]
+**	str	\2, \[sp, #?264\]
+**	mov	\2, #?0
+**	add	x0, sp, #?8
+**	bl	h
+**	...
+**	mrs	.*
+**	...
+**	bne	.*
+**	...
+**	add	sp, sp, #?272
+**	...
+**	ldr	p4, \[sp\]
+**	...
+**	addvl	sp, sp, #18
+**	ldp	x29, x30, \[sp\], #?16
+**	ret
+**	bl	__stack_chk_fail
+*/
+__SVBool_t test3() {
+  int y[0x40];
+  return *h(y);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/stack-protector-9.c b/gcc/testsuite/gcc.target/aarch64/stack-protector-9.c
new file mode 100644
index 00000000000..58f322aa480
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/stack-protector-9.c
@@ -0,0 +1,33 @@
+/* { dg-options "-O2 -mcpu=neoverse-v1 -fstack-protector-all" } */
+/* { dg-final { check-function-bodies "**" "" } } */
+
+/*
+** main:
+**	...
+**	stp	x29, x30, \[sp, #?-[0-9]+\]!
+**	...
+**	sub	sp, sp, #[0-9]+
+**	...
+**	str	x[0-9]+, \[x29, #?-8\]
+**	...
+*/
+int f(const char *);
+void g(void *);
+int main(int argc, char* argv[])
+{
+  int a;
+  int b;
+  char c[2+f(argv[1])];
+  int d[0x100];
+  char y;
+
+  y=42; a=4; b=10;
+  c[0] = 'h'; c[1] = '\0';
+
+  c[f(argv[2])] = '\0';
+
+  __builtin_printf("%d %d\n%s\n", a, b, c);
+  g(d);
+
+  return 0;
+}
-- 
2.25.1



Thread overview: 21+ messages
2023-09-12 15:25 [PATCH 00/19] aarch64: Fix -fstack-protector issue Richard Sandiford
2023-09-12 15:25 ` [PATCH 01/19] aarch64: Use local frame vars in shrink-wrapping code Richard Sandiford
2023-09-12 15:25 ` [PATCH 02/19] aarch64: Avoid a use of callee_offset Richard Sandiford
2023-09-12 15:25 ` [PATCH 03/19] aarch64: Explicitly handle frames with no saved registers Richard Sandiford
2023-09-12 15:25 ` [PATCH 04/19] aarch64: Add bytes_below_saved_regs to frame info Richard Sandiford
2023-09-12 15:25 ` [PATCH 05/19] aarch64: Add bytes_below_hard_fp " Richard Sandiford
2023-09-12 15:25 ` [PATCH 06/19] aarch64: Tweak aarch64_save/restore_callee_saves Richard Sandiford
2023-09-12 15:25 ` [PATCH 07/19] aarch64: Only calculate chain_offset if there is a chain Richard Sandiford
2023-09-12 15:25 ` [PATCH 08/19] aarch64: Rename locals_offset to bytes_above_locals Richard Sandiford
2023-09-12 15:25 ` [PATCH 09/19] aarch64: Rename hard_fp_offset to bytes_above_hard_fp Richard Sandiford
2023-09-12 15:25 ` [PATCH 10/19] aarch64: Tweak frame_size comment Richard Sandiford
2023-09-12 15:25 ` [PATCH 11/19] aarch64: Measure reg_offset from the bottom of the frame Richard Sandiford
2023-09-12 15:25 ` [PATCH 12/19] aarch64: Simplify top of frame allocation Richard Sandiford
2023-09-12 15:25 ` [PATCH 13/19] aarch64: Minor initial adjustment tweak Richard Sandiford
2023-09-12 15:25 ` [PATCH 14/19] aarch64: Tweak stack clash boundary condition Richard Sandiford
2023-09-12 15:25 ` [PATCH 15/19] aarch64: Put LR save probe in first 16 bytes Richard Sandiford
2023-09-12 15:25 ` [PATCH 16/19] aarch64: Simplify probe of final frame allocation Richard Sandiford
2023-09-12 15:25 ` [PATCH 17/19] aarch64: Explicitly record probe registers in frame info Richard Sandiford
2023-09-12 15:25 ` [PATCH 18/19] aarch64: Remove below_hard_fp_saved_regs_size Richard Sandiford
2023-09-12 15:25 ` [PATCH 19/19] aarch64: Make stack smash canary protect saved registers Richard Sandiford [this message]
2023-09-12 16:45 ` [PATCH 00/19] aarch64: Fix -fstack-protector issue Siddhesh Poyarekar
