From: Alexandre Oliva <aoliva@gcc.gnu.org>
To: gcc-cvs@gcc.gnu.org
Subject: [gcc(refs/users/aoliva/heads/testme)] hardcfr: check before potential sibcalls
Date: Thu,  1 Sep 2022 04:22:04 +0000 (GMT)
Message-ID: <20220901042204.1ADD7382EA0A@sourceware.org>

https://gcc.gnu.org/g:81ab581a6775b5aaca8083815ef54ec91b7b4fbf

commit 81ab581a6775b5aaca8083815ef54ec91b7b4fbf
Author: Alexandre Oliva <oliva@adacore.com>
Date:   Fri Aug 26 02:51:16 2022 -0300

    hardcfr: check before potential sibcalls

Diff:
---
 .../doc/gnat_rm/security_hardening_features.rst    | 140 +++++-
 gcc/common.opt                                     |   4 +
 gcc/doc/invoke.texi                                |  42 +-
 gcc/gimple-harden-control-flow.cc                  | 523 ++++++++++++++++-----
 .../c-c++-common/torture/harden-cfr-notail.c       |   8 +
 .../c-c++-common/torture/harden-cfr-tail.c         |  71 ++-
 6 files changed, 635 insertions(+), 153 deletions(-)

diff --git a/gcc/ada/doc/gnat_rm/security_hardening_features.rst b/gcc/ada/doc/gnat_rm/security_hardening_features.rst
index 9d762e7c8cc..54614145a97 100644
--- a/gcc/ada/doc/gnat_rm/security_hardening_features.rst
+++ b/gcc/ada/doc/gnat_rm/security_hardening_features.rst
@@ -263,25 +263,127 @@ For each block that is marked as visited, the mechanism checks that at
 least one of its predecessors, and at least one of its successors, are
 also marked as visited.
 
-Verification is performed just before returns and tail calls.
-Verification may also be performed before noreturn calls, whether only
-nothrow ones, with :switch:`-fhardcfr-check-noreturn-calls=nothrow`,
-or all of them, with :switch:`-fhardcfr-check-noreturn-calls=always`.
-Furthermore, any subprogram from which an exception may escape, i.e.,
-that may raise or propagate an exception that isn't handled
-internally, is automatically enclosed by a cleanup handler that
-performs verification, unless this is disabled with
-:switch:`-fno-hardcfr-check-exceptions`.  When a noreturn call returns
-control to its caller through an exception, verification may have
-already been performed before the call, assuming
-:switch:`-fhardcfr-check-noreturn-calls=always` is in effect.  The
-compiler arranges for already-checked noreturn calls without a
-preexisting handler to bypass the implicitly-added cleanup handler and
-thus the redundant check, but calls with a local handler will use it,
-which modifies the set of visited blocks, and checking will take place
-againwhen the caller reaches the next verification point, whether it
-is a return or reraise statement after the exception is otherwise
-handled, or even another noreturn call.
+Verification is performed just before a subprogram returns.  The
+following fragment:
+
+.. code-block:: ada
+
+   if X then
+     Y := F (Z);
+     return;
+   end if;
+
+
+gets turned into:
+
+.. code-block:: ada
+
+   type Visited_Bitmap is array (1..N) of Boolean with Pack;
+   Visited : Visited_Bitmap := (others => False);
+   --  Bitmap of visited blocks.  N is the basic block count.
+   [...]
+   --  Basic block #I
+   Visited(I) := True;
+   if X then
+     --  Basic block #J
+     Visited(J) := True;
+     Y := F (Z);
+     CFR.Check (N, Visited'Access, CFG'Access);
+     --  CFR is a hypothetical package whose Check procedure calls
+     --  libgcc's __hardcfr_check, which traps if the Visited bitmap
+     --  does not hold a valid path in CFG, the run-time
+     --  representation of the control flow graph in the enclosing
+     --  subprogram.
+     return;
+   end if;
+   --  Basic block #K
+   Visited(K) := True;
+
+
+Verification would also be performed before must-tail calls, and
+before early-marked potential tail calls, but these do not occur in
+practice, as potential tail calls are only detected in late
+optimization phases, too late for this transformation to act on them.
+
+In order to avoid adding verification after potential tail calls,
+which would prevent tail-call optimization, we recognize returning
+calls, i.e., calls whose result, if any, is returned by the calling
+subprogram to its caller immediately after the call returns.
+Verification is performed before such calls, whether or not they are
+ultimately optimized to tail calls.  This behavior is enabled by
+default whenever sibcall optimization is enabled (see
+:switch:`-foptimize-sibling-calls`); it may be disabled with
+:switch:`-fno-hardcfr-check-returning-calls`, or enabled with
+:switch:`-fhardcfr-check-returning-calls`, regardless of that
+optimization.  Note, however, that the lack of other optimizations
+may prevent calls from being recognized as returning calls:
+
+.. code-block:: ada
+
+     --  -fhardcfr-check-returning-calls: CFR.Check here.
+     P (X);
+     --  -fno-hardcfr-check-returning-calls: CFR.Check here.
+     return;
+
+or:
+
+.. code-block:: ada
+
+     --  -fhardcfr-check-returning-calls: CFR.Check here.
+     R := F (X);
+     --  -fno-hardcfr-check-returning-calls: CFR.Check here.
+     return R;
+
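+Conversely, a call whose result is further modified before being
+returned is not recognized as a returning call; verification is then
+performed before the return statement instead (an illustrative
+fragment, not actual compiler output):
+
+.. code-block:: ada
+
+     R := F (X);
+     --  No check before the call: the result is not returned
+     --  unchanged, so this is not a returning call.
+     R := R + 1;
+     --  CFR.Check here, before the return.
+     return R;
+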
+
+Any subprogram from which an exception may escape, i.e., that may
+raise or propagate an exception that isn't handled internally, is
+conceptually enclosed by a cleanup handler that performs verification,
+unless this is disabled with :switch:`-fno-hardcfr-check-exceptions`.
+With this feature enabled, a subprogram body containing:
+
+.. code-block:: ada
+
+     --  ...
+       Y := F (X);  -- May raise exceptions.
+     --  ...
+       raise E;  -- Not handled internally.
+     --  ...
+
+
+gets modified as follows:
+
+.. code-block:: ada
+
+   begin
+     --  ...
+       Y := F (X);  -- May raise exceptions.
+     --  ...
+       raise E;  -- Not handled internally.
+     --  ...
+   exception
+     when others =>
+       CFR.Check (N, Visited'Access, CFG'Access);
+       raise;
+   end;
+
+
+Verification may also be performed before No_Return calls, whether
+only nothrow ones, with
+:switch:`-fhardcfr-check-noreturn-calls=nothrow`, or all of them, with
+:switch:`-fhardcfr-check-noreturn-calls=always`.  The default,
+:switch:`-fhardcfr-check-noreturn-calls=never`, disables checking
+before No_Return calls.
+
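+For instance, with :switch:`-fhardcfr-check-noreturn-calls=always`, a
+call to a hypothetical No_Return procedure Fatal_Error (an
+illustrative name, not part of the implementation) would have a check
+inserted just before it:
+
+.. code-block:: ada
+
+   --  Basic block #I
+   Visited(I) := True;
+   --  ...
+   CFR.Check (N, Visited'Access, CFG'Access);
+   Fatal_Error;  --  No_Return procedure: control does not return.
+
+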
+When a No_Return call returns control to its caller through an
+exception, verification may have already been performed before the
+call, assuming :switch:`-fhardcfr-check-noreturn-calls=always` is in
+effect.  The compiler arranges for already-checked No_Return calls
+without a preexisting handler to bypass the implicitly-added cleanup
+handler and thus the redundant check, but a local exception handler,
+if present, will modify the set of visited blocks, and checking will
+take place again when the caller reaches the next verification point,
+whether it is a return or reraise statement after the exception is
+otherwise handled, or even another No_Return call.
 
 The instrumentation for hardening with control flow redundancy can be
 observed in dump files generated by the command-line option
diff --git a/gcc/common.opt b/gcc/common.opt
index 57f01330fcf..09cc46d7dbc 100644
--- a/gcc/common.opt
+++ b/gcc/common.opt
@@ -1801,6 +1801,10 @@ fharden-control-flow-redundancy
 Common Var(flag_harden_control_flow_redundancy) Optimization
 Harden control flow by recording and checking execution paths.
 
+fhardcfr-check-returning-calls
+Common Var(flag_harden_control_flow_redundancy_check_returning_calls) Init(-1) Optimization
+Check CFR execution paths also before calls followed by returns of their results.
+
 fhardcfr-check-exceptions
 Common Var(flag_harden_control_flow_redundancy_check_exceptions) Init(-1) Optimization
 Check CFR execution paths also when exiting a function through an exception.
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 1a8409d6e3e..f03106883ad 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -625,6 +625,7 @@ Objective-C and Objective-C++ Dialects}.
 -fcf-protection=@r{[}full@r{|}branch@r{|}return@r{|}none@r{|}check@r{]} @gol
 -fharden-compares -fharden-conditional-branches @gol
 -fharden-control-flow-redundancy  -fhardcfr-check-exceptions  @gol
+-fhardcfr-check-returning-calls  @gol
 -fhardcfr-check-noreturn-calls=@r{[}always@r{|}nothrow@r{|}never@r{]}  @gol
 -fstack-protector  -fstack-protector-all  -fstack-protector-strong @gol
 -fstack-protector-explicit  -fstack-check @gol
@@ -16557,12 +16558,23 @@ conditionals.
 @item -fharden-control-flow-redundancy
 @opindex fharden-control-flow-redundancy
 Emit extra code to set booleans when entering basic blocks, and to
-verify, at function exits (returns, before tail calls, and optionally,
-before escaping exceptions with @option{-fhardcfr-check-exceptions}, and
-before noreturn calls with @option{-fhardcfr-check-noreturn-calls}), and
-trap when they indicate an execution path that is incompatible with the
-control flow graph.  Tuning options @option{--param hardcfr-max-blocks}
-and @option{--param hardcfr-max-inline-blocks} are available.
+verify and trap, at function exits, when the booleans do not form an
+execution path that is compatible with the control flow graph.
+
+Verification takes place before returns, before mandatory tail calls
+(see below) and, optionally, before escaping exceptions with
+@option{-fhardcfr-check-exceptions}, before returning calls with
+@option{-fhardcfr-check-returning-calls}, and before noreturn calls with
+@option{-fhardcfr-check-noreturn-calls}.  Tuning options
+@option{--param hardcfr-max-blocks} and @option{--param
+hardcfr-max-inline-blocks} are available.
+
+Tail-call optimization takes place too late to affect control flow
+redundancy.  Calls annotated as mandatory tail calls by language
+front-ends, and any calls marked early enough as potential tail
+calls, would also have verification issued before them, but these
+possibilities are merely theoretical: such conditions can only be
+met when using custom compiler plugins.
 
 @item -fhardcfr-check-exceptions
 @opindex fhardcfr-check-exceptions
@@ -16573,6 +16585,24 @@ escape points, as if the function body was wrapped with a cleanup
 handler that performed the check and reraised.  This option is enabled
 by default; use @option{-fno-hardcfr-check-exceptions} to disable it.
 
+@item -fhardcfr-check-returning-calls
+@opindex fhardcfr-check-returning-calls
+@opindex fno-hardcfr-check-returning-calls
+When @option{-fharden-control-flow-redundancy} is active, check the
+recorded execution path against the control flow graph before any
+function call immediately followed by a return of its result, if any,
+whether or not the call is ultimately optimized to a tail call, so as
+not to prevent tail-call optimization.
+
+This option is enabled by default whenever
+@option{-foptimize-sibling-calls} is enabled, but it can be enabled (or
+disabled, using its negated form) explicitly, regardless of the
+optimization.  Note, however, that without other optimizations, the
+initially-unified return block of each function remains separate from
+any preceding statements, so calls are not found to be immediately
+followed by a return statement, and are neither recognized nor
+handled as returning calls.
+
 @item -fhardcfr-check-noreturn-calls=@r{[}always@r{|}nothrow@r{|}never@r{]}
 @opindex fhardcfr-check-noreturn-calls
 When @option{-fharden-control-flow-redundancy} is active, check the
diff --git a/gcc/gimple-harden-control-flow.cc b/gcc/gimple-harden-control-flow.cc
index 6d03e24d38a..42dee6196bb 100644
--- a/gcc/gimple-harden-control-flow.cc
+++ b/gcc/gimple-harden-control-flow.cc
@@ -128,6 +128,270 @@ public:
 
 }
 
+/* Return TRUE iff CFR checks should be inserted before returning
+   calls.  */
+
+static bool
+check_returning_calls_p ()
+{
+  return
+    flag_harden_control_flow_redundancy_check_returning_calls > 0
+    || (flag_harden_control_flow_redundancy_check_returning_calls < 0
+	/* Gates pass_tail_calls.  */
+	&& flag_optimize_sibling_calls
+	/* Gates pass_all_optimizations.  */
+	&& optimize >= 1 && !optimize_debug);
+}
+
+/* Scan BB from the end, updating *RETPTR, if given, as return stmts
+   and copies are found.  Return a call, or a stmt that cannot appear
+   after a tail call, or NULL if the top of the block is reached
+   without finding any.  */
+
+static gimple *
+hardcfr_scan_block (basic_block bb, tree **retptr)
+{
+  gimple_stmt_iterator gsi;
+  for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi); gsi_prev (&gsi))
+    {
+      gimple *stmt = gsi_stmt (gsi);
+
+      /* Ignore labels, nops, predicts, clobbers and debug stmts.  */
+      if (gimple_code (stmt) == GIMPLE_LABEL
+	  || gimple_code (stmt) == GIMPLE_NOP
+	  || gimple_code (stmt) == GIMPLE_PREDICT
+	  || gimple_clobber_p (stmt)
+	  || is_gimple_debug (stmt))
+	continue;
+
+      if (gimple_code (stmt) == GIMPLE_RETURN)
+	{
+	  greturn *gret = as_a <greturn *> (stmt);
+	  if (retptr)
+	    {
+	      gcc_checking_assert (!*retptr);
+	      *retptr = gimple_return_retval_ptr (gret);
+	    }
+	  continue;
+	}
+
+      /* Check for a call.  */
+      if (is_gimple_call (stmt))
+	return stmt;
+
+      /* Allow simple copies to the return value, updating the return
+	 value to be found in earlier assignments.  */
+      if (retptr && *retptr && gimple_assign_single_p (stmt)
+	  && **retptr == gimple_assign_lhs (stmt))
+	{
+	  *retptr = gimple_assign_rhs1_ptr (stmt);
+	  continue;
+	}
+
+      return stmt;
+    }
+
+  /* Any other kind of stmt will prevent a tail call.  */
+  return NULL;
+}
+
+/* Return TRUE iff CALL is to be preceded by a CFR checkpoint, i.e.,
+   if it's a returning call (one whose result is ultimately returned
+   without intervening non-copy statements) and we're checking
+   returning calls, a __builtin_return call (noreturn with a path to
+   the exit block), a must-tail call, or a tail call.  */
+
+static bool
+returning_call_p (gcall *call)
+{
+  if (!(gimple_call_noreturn_p (call)
+	|| gimple_call_must_tail_p (call)
+	|| gimple_call_tail_p (call)
+	|| check_returning_calls_p ()))
+    return false;
+
+  /* Quickly check that there's a path to exit compatible with a
+     returning call.  Detect infinite loops through the counter.  */
+  basic_block bb = gimple_bb (call);
+  auto_vec<basic_block, 10> path;
+  for (int i = n_basic_blocks_for_fn (cfun);
+       bb != EXIT_BLOCK_PTR_FOR_FN (cfun) && i--;
+       bb = single_succ (bb))
+    if (!single_succ_p (bb)
+	|| (single_succ_edge (bb)->flags & EDGE_EH) != 0)
+      return false;
+    else
+      path.safe_push (bb);
+
+  /* Check the stmts in the blocks and trace the return value.  */
+  tree *retptr = NULL;
+  for (;;)
+    {
+      gcc_checking_assert (!path.is_empty ());
+      gimple *stop = hardcfr_scan_block (path.pop (), &retptr);
+      if (stop)
+	{
+	  if (stop != call)
+	    return false;
+	  gcc_checking_assert (path.is_empty ());
+	  break;
+	}
+    }
+
+  return (gimple_call_noreturn_p (call)
+	  || gimple_call_must_tail_p (call)
+	  || gimple_call_tail_p (call)
+	  || (gimple_call_lhs (call) == (retptr ? *retptr : NULL)
+	      && check_returning_calls_p ()));
+}
+
+typedef auto_vec<edge, 10> chk_edges_t;
+
+/* Declare for mutual recursion.  */
+static bool hardcfr_sibcall_search_preds (basic_block bb,
+					  chk_edges_t &chk_edges,
+					  int &count_chkcall,
+					  auto_sbitmap &chkcall_blocks,
+					  int &count_postchk,
+					  auto_sbitmap &postchk_blocks,
+					  tree *retptr);
+
+/* Search backwards from the end of BB for a mandatory or potential
+   sibcall.  If one is found, schedule the block to be handled much
+   like a noreturn block.  Recurse to preds, with updated RETPTR, if
+   the block only contains stmts that may follow such a call,
+   scheduling checking at edges and marking blocks as post-check as
+   needed.  Return true iff, at the end of the block, a check will
+   have already been performed.  */
+
+static bool
+hardcfr_sibcall_search_block (basic_block bb,
+			      chk_edges_t &chk_edges,
+			      int &count_chkcall,
+			      auto_sbitmap &chkcall_blocks,
+			      int &count_postchk,
+			      auto_sbitmap &postchk_blocks,
+			      tree *retptr)
+{
+  /* Conditionals and internal exceptions rule out tail calls.  */
+  if (!single_succ_p (bb)
+      || (single_succ_edge (bb)->flags & EDGE_EH) != 0)
+    return false;
+
+  gimple *stmt = hardcfr_scan_block (bb, &retptr);
+  if (!stmt)
+    return hardcfr_sibcall_search_preds (bb, chk_edges,
+					 count_chkcall, chkcall_blocks,
+					 count_postchk, postchk_blocks,
+					 retptr);
+
+  if (!is_a <gcall *> (stmt))
+    return false;
+
+  /* Avoid disrupting mandatory or early-marked tail calls,
+     inserting the check before them.  This works for
+     must-tail calls, but tail calling as an optimization is
+     detected too late for us.
+
+     Also check for noreturn calls here.  Noreturn calls won't
+     normally have edges to exit, so they won't be found here,
+     but __builtin_return does, and we must check before
+     it, so handle it like a tail call.  */
+  gcall *call = as_a <gcall *> (stmt);
+  if (!(gimple_call_noreturn_p (call)
+	|| gimple_call_must_tail_p (call)
+	|| gimple_call_tail_p (call)
+	|| (gimple_call_lhs (call) == (retptr ? *retptr : NULL)
+	    && check_returning_calls_p ())))
+    return false;
+
+  gcc_checking_assert (returning_call_p (call));
+
+  /* We found a call that is to be preceded by checking.  */
+  if (bitmap_set_bit (chkcall_blocks, bb->index))
+    ++count_chkcall;
+  else
+    gcc_unreachable ();
+  return true;
+}
+
+
+/* Search preds of BB for a mandatory or potential sibcall or
+   returning call, and arrange for the blocks containing them to have
+   a check inserted before the call, like noreturn calls.  If any
+   preds are found to perform checking, schedule checks at the edges
+   of those that don't, and mark BB as postcheck.  */
+
+static bool
+hardcfr_sibcall_search_preds (basic_block bb,
+			      chk_edges_t &chk_edges,
+			      int &count_chkcall,
+			      auto_sbitmap &chkcall_blocks,
+			      int &count_postchk,
+			      auto_sbitmap &postchk_blocks,
+			      tree *retptr)
+{
+  /* For the exit block, we wish to force a check at every
+     predecessor, so pretend we've already found a pred that
+     performed checking; checks then get scheduled at every one
+     of its pred edges.  */
+  bool first = bb->index >= NUM_FIXED_BLOCKS;
+  bool postchecked = true;
+
+  gphi *retphi = NULL;
+  if (retptr && *retptr && TREE_CODE (*retptr) == SSA_NAME
+      && !SSA_NAME_IS_DEFAULT_DEF (*retptr)
+      && SSA_NAME_DEF_STMT (*retptr)
+      && is_a <gphi *> (SSA_NAME_DEF_STMT (*retptr))
+      && gimple_bb (SSA_NAME_DEF_STMT (*retptr)) == bb)
+    {
+      retphi = as_a <gphi *> (SSA_NAME_DEF_STMT (*retptr));
+      gcc_checking_assert (gimple_phi_result (retphi) == *retptr);
+    }
+
+  for (int i = EDGE_COUNT (bb->preds); i--; first = false)
+    {
+      edge e = EDGE_PRED (bb, i);
+
+      bool checked
+	= hardcfr_sibcall_search_block (e->src, chk_edges,
+					count_chkcall, chkcall_blocks,
+					count_postchk, postchk_blocks,
+					!retphi ? retptr
+					: gimple_phi_arg_def_ptr (retphi, i));
+
+      if (first)
+	{
+	  postchecked = checked;
+	  continue;
+	}
+
+      /* When we first find a checked block, force a check at every
+	 other incoming edge we've already visited, and those we
+	 visit afterwards that don't have their own check, so that
+	 when we reach BB, the check has already been performed.  */
+      if (!postchecked && checked)
+	{
+	  for (int j = EDGE_COUNT (bb->preds); --j > i; )
+	    chk_edges.safe_push (EDGE_PRED (bb, j));
+	  postchecked = true;
+	}
+      if (postchecked && !checked)
+	chk_edges.safe_push (EDGE_PRED (bb, i));
+    }
+
+  if (postchecked && bb->index >= NUM_FIXED_BLOCKS)
+    {
+      if (bitmap_set_bit (postchk_blocks, bb->index))
+	count_postchk++;
+      else
+	gcc_unreachable ();
+    }
+
+  return postchecked;
+}
+
+
 class rt_bb_visited
 {
   /* Use a sufficiently wide unsigned type to hold basic block numbers.  */
@@ -263,7 +527,7 @@ class rt_bb_visited
 
 public:
   /* Prepare to add control flow redundancy testing to CFUN.  */
-  rt_bb_visited (int noreturn_blocks)
+  rt_bb_visited (int checkpoints)
     : nblocks (n_basic_blocks_for_fn (cfun)),
       vword_type (NULL), ckseq (NULL), rtcfg (NULL)
   {
@@ -347,8 +611,7 @@ public:
     gimple_seq_add_stmt (&ckseq, detach);
 
     if (nblocks - NUM_FIXED_BLOCKS > blknum (param_hardcfr_max_inline_blocks)
-	|| (EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds)
-	    + noreturn_blocks > 1))
+	|| checkpoints > 1)
       {
 	/* Make sure vword_bits is wide enough for the representation
 	   of nblocks in rtcfg.  Compare with vword_bits << vword_bits,
@@ -373,63 +636,32 @@ public:
     gimple_seq_add_stmt (&ckseq, ckfail_init);
   }
 
-  /* Insert SEQ before a resx, or noreturn or tail call at the end of
-     INSBB, and return TRUE, otherwise return FALSE.  */
-  bool insert_exit_check (gimple_seq seq, basic_block insbb)
+  /* Insert SEQ before a resx or a call in INSBB.  */
+  void insert_exit_check_in_block (gimple_seq seq, basic_block insbb)
   {
-    /* If the returning block ends with a noreturn call, insert
-       checking before it.  This is particularly important for
-       __builtin_return.  Other noreturn calls won't have an edge to
-       the exit block, so they won't even be considered as exit paths.
-
-       Insert-on-edge inserts before other return stmts, but not
-       before calls, and if a single-block function had the check
-       sequence inserted after a noreturn call, it would be useless,
-       but checking would still flag it as malformed if block 2 has a
-       fallthru edge to the exit block.
-
-       Also avoid disrupting tail calls, inserting the check before
-       them.  This works for must-tail calls, but tail calling as an
-       optimization is detected too late for us.  */
     gimple_stmt_iterator gsi = gsi_last_bb (insbb);
-    gimple *ret = gsi_stmt (gsi);
 
-    if (ret && is_a <gresx *> (ret))
-      {
-	gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
-	return true;
-      }
-
-    if (ret && is_a <greturn *> (ret))
-      {
+    while (!gsi_end_p (gsi))
+      if (is_a <gresx *> (gsi_stmt (gsi))
+	  || is_a <gcall *> (gsi_stmt (gsi)))
+	break;
+      else
 	gsi_prev (&gsi);
-	if (!gsi_end_p (gsi))
-	  ret = gsi_stmt (gsi);
-      }
-    if (ret
-	&& is_a <gcall *> (ret)
-	&& (gimple_call_noreturn_p (ret)
-	    || gimple_call_must_tail_p (as_a <gcall *> (ret))
-	    || gimple_call_tail_p (as_a <gcall *> (ret))))
-      gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
-    else
-      return false;
 
-    return true;
+    gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
   }
 
-  /* Insert SEQ on E, or close enough (e.g., before a noreturn or tail
-     call at the end of E->src).  */
-  void insert_exit_check (gimple_seq seq, edge e)
+  /* Insert SEQ on E.  */
+  void insert_exit_check_on_edge (gimple_seq seq, edge e)
   {
-    if (!insert_exit_check (seq, e->src))
-      gsi_insert_seq_on_edge_immediate (e, seq);
+    gsi_insert_seq_on_edge_immediate (e, seq);
   }
 
-  /* Add checking code on every exit edge, and initialization code on
+  /* Add checking code to CHK_EDGES, and initialization code on
      the entry edge.  Before this point, the CFG has been undisturbed,
      and all the needed data has been collected and safely stowed.  */
-  void check (int count_noreturn, auto_sbitmap const &noreturn_blocks)
+  void check (chk_edges_t &chk_edges,
+	      int count_chkcall, auto_sbitmap const &chkcall_blocks)
   {
     /* If we're using out-of-line checking, create and statically
        initialize the CFG checking representation, generate the
@@ -491,93 +723,115 @@ public:
 						     rtcfg));
 	gimple_seq_add_stmt (&ckseq, call_chk);
 
+	gimple *clobber = gimple_build_assign (visited,
+					       build_clobber
+					       (TREE_TYPE (visited)));
+	gimple_seq_add_stmt (&ckseq, clobber);
+
 	/* If we have multiple exit edges, insert (copies of)
 	   ckseq in all of them.  */
-	for (int i = EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds);
-	     i--; )
+	for (int i = chk_edges.length (); i--; )
 	  {
 	    gimple_seq seq = ckseq;
 	    /* Copy the sequence, unless we're dealing with the
 	       last edge (we're counting down to zero).  */
-	    if (i || count_noreturn)
+	    if (i || count_chkcall)
 	      seq = gimple_seq_copy (seq);
 
-	    edge e = EDGE_PRED (EXIT_BLOCK_PTR_FOR_FN (cfun), i);
+	    edge e = chk_edges[i];
 
 	    if (dump_file)
-	      fprintf (dump_file,
-		       "Inserting out-of-line check in"
-		       " block %i's edge to exit.\n",
-		       e->src->index);
+	      {
+		if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
+		  fprintf (dump_file,
+			   "Inserting out-of-line check in"
+			   " block %i's edge to exit.\n",
+			   e->src->index);
+		else
+		  fprintf (dump_file,
+			   "Inserting out-of-line check in"
+			   " block %i's edge to postcheck block %i.\n",
+			   e->src->index, e->dest->index);
+	      }
 
-	    insert_exit_check (seq, e);
+	    insert_exit_check_on_edge (seq, e);
 
-	    gcc_checking_assert (!bitmap_bit_p (noreturn_blocks, e->src->index));
+	    gcc_checking_assert (!bitmap_bit_p (chkcall_blocks, e->src->index));
 	  }
 
 	sbitmap_iterator it;
 	unsigned i;
-	EXECUTE_IF_SET_IN_BITMAP (noreturn_blocks, 0, i, it)
+	EXECUTE_IF_SET_IN_BITMAP (chkcall_blocks, 0, i, it)
 	  {
 	    basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
 	    gimple_seq seq = ckseq;
-	    gcc_checking_assert (count_noreturn > 0);
-	    if (--count_noreturn)
+	    gcc_checking_assert (count_chkcall > 0);
+	    if (--count_chkcall)
 	      seq = gimple_seq_copy (seq);
 
 	    if (dump_file)
 	      fprintf (dump_file,
-		       "Inserting out-of-line check in noreturn block %i.\n",
+		       "Inserting out-of-line check before stmt in block %i.\n",
 		       bb->index);
 
-	    if (!insert_exit_check (seq, bb))
-	      gcc_unreachable ();
+	    insert_exit_check_in_block (seq, bb);
 	  }
 
-	gcc_checking_assert (count_noreturn == 0);
+	gcc_checking_assert (count_chkcall == 0);
       }
     else
       {
 	/* Inline checking requires a single exit edge.  */
-	gimple *last = gsi_stmt (gsi_last (ckseq));
+	gimple *last = gimple_build_assign (visited,
+					       build_clobber
+					       (TREE_TYPE (visited)));
+	gimple_seq_add_stmt (&ckseq, last);
 
-	if (!count_noreturn)
+	if (!count_chkcall)
 	  {
+	    edge e = single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun));
+
 	    if (dump_file)
-	      fprintf (dump_file,
-		       "Inserting inline check in"
-		       " block %i's edge to exit.\n",
-		       single_pred (EXIT_BLOCK_PTR_FOR_FN (cfun))->index);
+	      {
+		if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun))
+		  fprintf (dump_file,
+			   "Inserting inline check in"
+			   " block %i's edge to exit.\n",
+			   e->src->index);
+		else
+		  fprintf (dump_file,
+			   "Inserting inline check in"
+			   " block %i's edge to postcheck block %i.\n",
+			   e->src->index, e->dest->index);
+	      }
 
-	    insert_exit_check (ckseq,
-			       single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)));
+	    insert_exit_check_on_edge (ckseq, e);
 	  }
 	else
 	  {
-	    gcc_checking_assert (count_noreturn == 1);
+	    gcc_checking_assert (count_chkcall == 1);
 
 	    sbitmap_iterator it;
 	    unsigned i;
-	    EXECUTE_IF_SET_IN_BITMAP (noreturn_blocks, 0, i, it)
+	    EXECUTE_IF_SET_IN_BITMAP (chkcall_blocks, 0, i, it)
 	      {
 		basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i);
 
 		gimple_seq seq = ckseq;
-		gcc_checking_assert (count_noreturn > 0);
-		if (--count_noreturn)
+		gcc_checking_assert (count_chkcall > 0);
+		if (--count_chkcall)
 		  seq = gimple_seq_copy (seq);
 
 		if (dump_file)
 		  fprintf (dump_file,
-			   "Inserting inline check in noreturn block %i.\n",
+			   "Inserting inline check before stmt in block %i.\n",
 			   bb->index);
 
-		if (!insert_exit_check (seq, bb))
-		  gcc_unreachable ();
+		insert_exit_check_in_block (seq, bb);
 	      }
 
-	    gcc_checking_assert (count_noreturn == 0);
+	    gcc_checking_assert (count_chkcall == 0);
 	  }
 
 	/* The inserted ckseq computes CKFAIL at LAST.  Now we have to
@@ -708,37 +962,45 @@ public:
 
   /* Add to BB code to set its bit in VISITED, and add to RTCFG or
      CKSEQ the data or code needed to check BB's predecessors and
-     successors.  Do NOT change the CFG.  */
-  void visit (basic_block bb, bool noreturn)
+     successors.  If NORETURN, assume the block is a checkpoint,
+     whether or not it has an edge to EXIT.  If POSTCHECK, assume the
+     block post-dominates checkpoints and therefore no bitmap setting
+     or checks are to be performed in or for it.  Do NOT change the
+     CFG.  */
+  void visit (basic_block bb, bool noreturn, bool postcheck)
   {
     /* Set the bit in VISITED when entering the block.  */
     gimple_stmt_iterator gsi = gsi_after_labels (bb);
-    gsi_insert_seq_before (&gsi, vset (bb), GSI_SAME_STMT);
+    if (!postcheck)
+      gsi_insert_seq_before (&gsi, vset (bb), GSI_SAME_STMT);
 
     if (rtcfg)
       {
-	/* Build a list of (index, mask) terminated by (NULL, 0).
-	   Consolidate masks with the same index when they're
-	   adjacent.  First, predecessors.  Count backwards, because
-	   we're going to reverse the list.  The order shouldn't
-	   matter, but let's not make it surprising.  */
-	for (int i = EDGE_COUNT (bb->preds); i--; )
-	  if (push_rtcfg_pair (EDGE_PRED (bb, i)->src, bb,
-			       ENTRY_BLOCK_PTR_FOR_FN (cfun)))
-	    break;
-	rtcfg = tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rtcfg);
-
-	/* Then, successors.  */
-	if (!noreturn
-	    || !push_rtcfg_pair (EXIT_BLOCK_PTR_FOR_FN (cfun),
-				 bb, EXIT_BLOCK_PTR_FOR_FN (cfun)))
-	  for (int i = EDGE_COUNT (bb->succs); i--; )
-	    if (push_rtcfg_pair (EDGE_SUCC (bb, i)->dest, bb,
-				 EXIT_BLOCK_PTR_FOR_FN (cfun)))
-	      break;
+	if (!postcheck)
+	  {
+	    /* Build a list of (index, mask) terminated by (NULL, 0).
+	       Consolidate masks with the same index when they're
+	       adjacent.  First, predecessors.  Count backwards, because
+	       we're going to reverse the list.  The order shouldn't
+	       matter, but let's not make it surprising.  */
+	    for (int i = EDGE_COUNT (bb->preds); i--; )
+	      if (push_rtcfg_pair (EDGE_PRED (bb, i)->src, bb,
+				   ENTRY_BLOCK_PTR_FOR_FN (cfun)))
+		break;
+	    rtcfg = tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rtcfg);
+
+	    /* Then, successors.  */
+	    if (!noreturn
+		|| !push_rtcfg_pair (EXIT_BLOCK_PTR_FOR_FN (cfun),
+				     bb, EXIT_BLOCK_PTR_FOR_FN (cfun)))
+	      for (int i = EDGE_COUNT (bb->succs); i--; )
+		if (push_rtcfg_pair (EDGE_SUCC (bb, i)->dest, bb,
+				     EXIT_BLOCK_PTR_FOR_FN (cfun)))
+		  break;
+	  }
 	rtcfg = tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rtcfg);
       }
-    else
+    else if (!postcheck)
       {
 	/* Schedule test to fail if the block was reached but somehow none
 	   of its predecessors were.  */
@@ -802,8 +1064,8 @@ pass_harden_control_flow_redundancy::execute (function *fun)
     = (flag_exceptions
        && check_before_noreturn_calls
        && flag_harden_control_flow_redundancy_check_noreturn >= HCFRNR_ALWAYS);
-  basic_block bb_eh_cleanup = NULL;
   basic_block bb;
+  basic_block bb_eh_cleanup = NULL;
 
   if (check_at_escaping_exceptions)
     {
@@ -839,12 +1101,13 @@ pass_harden_control_flow_redundancy::execute (function *fun)
 	      if (lookup_stmt_eh_lp (stmt) != 0)
 		continue;
 
-	      /* Don't split blocks at, nor add EH edvges to, tail
+	      /* Don't split blocks at, nor add EH edges to, tail
 		 calls, we will add verification before the call
 		 anyway.  */
 	      if (is_a <gcall *> (stmt)
 		  && (gimple_call_must_tail_p (as_a <gcall *> (stmt))
-		      || gimple_call_tail_p (as_a <gcall *> (stmt))))
+		      || gimple_call_tail_p (as_a <gcall *> (stmt))
+		      || returning_call_p (as_a <gcall *> (stmt))))
 		continue;
 
 	      if (!gsi_one_before_end_p (gsi))
@@ -947,13 +1210,18 @@ pass_harden_control_flow_redundancy::execute (function *fun)
 	}
     }
 
+  /* These record blocks with calls that are to be preceded by
+     checkpoints, such as noreturn calls (if so chosen), must-tail
+     calls, potential early-marked tail calls, and returning calls (if
+     so chosen).  */
+  int count_chkcall = 0;
+  auto_sbitmap chkcall_blocks (last_basic_block_for_fn (fun));
+  bitmap_clear (chkcall_blocks);
+
   /* We wish to add verification at blocks without successors, such as
      noreturn calls (raising or not) and the reraise at the cleanup
      block, but not other reraises: they will go through the cleanup
      block.  */
-  int count_noreturn = 0;
-  auto_sbitmap noreturn_blocks (last_basic_block_for_fn (fun));
-  bitmap_clear (noreturn_blocks);
   if (check_before_noreturn_calls)
     FOR_EACH_BB_FN (bb, fun)
       {
@@ -992,8 +1260,8 @@ pass_harden_control_flow_redundancy::execute (function *fun)
 		    print_gimple_stmt (dump_file, stmt, 0);
 		  }
 
-		if (bitmap_set_bit (noreturn_blocks, bb->index))
-		  count_noreturn++;
+		if (bitmap_set_bit (chkcall_blocks, bb->index))
+		  count_chkcall++;
 		else
 		  gcc_unreachable ();
 	      }
@@ -1033,27 +1301,27 @@ pass_harden_control_flow_redundancy::execute (function *fun)
 		print_gimple_stmt (dump_file, stmt, 0);
 	      }
 
-	    if (bitmap_set_bit (noreturn_blocks, bb->index))
-	      count_noreturn++;
+	    if (bitmap_set_bit (chkcall_blocks, bb->index))
+	      count_chkcall++;
 	    else
 	      gcc_unreachable ();
 	  }
       }
   else if (bb_eh_cleanup)
     {
-      if (bitmap_set_bit (noreturn_blocks, bb_eh_cleanup->index))
-	count_noreturn++;
+      if (bitmap_set_bit (chkcall_blocks, bb_eh_cleanup->index))
+	count_chkcall++;
       else
 	gcc_unreachable ();
     }
 
   gcc_checking_assert (!bb_eh_cleanup
-		       || bitmap_bit_p (noreturn_blocks, bb_eh_cleanup->index));
+		       || bitmap_bit_p (chkcall_blocks, bb_eh_cleanup->index));
 
   /* If we don't have edges to exit nor noreturn calls (including the
      cleanup reraise), then we may skip instrumentation: that would
      amount to a function that ends with an infinite loop.  */
-  if (!count_noreturn
+  if (!count_chkcall
       && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (fun)->preds) == 0)
     {
       if (dump_file)
@@ -1063,7 +1331,19 @@ pass_harden_control_flow_redundancy::execute (function *fun)
       return 0;
     }
 
-  rt_bb_visited vstd (count_noreturn);
+  /* Search for must-tail calls, early-marked potential tail calls,
+     and, if requested, returning calls: schedule checks before them,
+     collect edges needing checks, and mark post-check blocks.  */
+  int count_postchk = 0;
+  auto_sbitmap postchk_blocks (last_basic_block_for_fn (fun));
+  bitmap_clear (postchk_blocks);
+  chk_edges_t chk_edges;
+  hardcfr_sibcall_search_preds (EXIT_BLOCK_PTR_FOR_FN (fun), chk_edges,
+				count_chkcall, chkcall_blocks,
+				count_postchk, postchk_blocks,
+				NULL);
+
+  rt_bb_visited vstd (chk_edges.length () + count_chkcall);
 
   /* Visit blocks in index order, because building rtcfg depends on
      that.  Blocks must be compact, which the cleanup_cfg requirement
@@ -1076,10 +1356,11 @@ pass_harden_control_flow_redundancy::execute (function *fun)
     {
       bb = BASIC_BLOCK_FOR_FN (fun, i);
       gcc_checking_assert (bb->index == i);
-      vstd.visit (bb, bitmap_bit_p (noreturn_blocks, i));
+      vstd.visit (bb, bitmap_bit_p (chkcall_blocks, i),
+		  bitmap_bit_p (postchk_blocks, i));
     }
 
-  vstd.check (count_noreturn, noreturn_blocks);
+  vstd.check (chk_edges, count_chkcall, chkcall_blocks);
 
   return
     TODO_update_ssa
diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c
new file mode 100644
index 00000000000..98b745f16e9
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c
@@ -0,0 +1,8 @@
+/* { dg-do compile } */
+/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */
+
+#include "harden-cfr-tail.c"
+
+/* Inline checking after the calls, disabling tail calling.  */
+/* { dg-final { scan-tree-dump-times "__builtin_trap" 3 "hardcfr" } } */
+/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 0 "hardcfr" } } */
diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c
index 09daa70fa3a..2f956b7af32 100644
--- a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c
+++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c
@@ -1,17 +1,74 @@
 /* { dg-do compile } */
-/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects" } */
+/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-returning-calls -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects -Wno-return-type" } */
 
 /* We'd like to check that we insert checking so as to not disrupt tail calls,
-   but unfortunately mandatory tail calls are not available in C, and optimizing
-   calls as tail calls only takes place after hardcfr.  */
+   but unfortunately mandatory tail calls are not available in C, and
+   optimizing calls as tail calls only takes place after hardcfr, so we insert
+   checking before calls followed by return stmts with the same return value,
+   because they might end up as tail calls.  */
 
 extern int g(int i);
 
 int f(int i) {
+  /* Inline check before the returning call.  */
   return g (i);
 }
 
-/* Inline checking before the tail call.  */
-/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */
-/* Inline checking before the tail call.  */
-/* { dg-final { scan-tree-dump-times "\\\[tail\]" 1 "hardcfr" { xfail *-*-* } } } */
+extern void g2(int i);
+
+void f2(int i) {
+  /* Inline check before the returning call, that ignores the returned value,
+     matching the value-less return.  */
+  g2 (i);
+  return;
+}
+
+void f3(int i) {
+  /* Inline check before the returning call.  */
+  g (i);
+}
+
+int f4(int i) {
+  /* Inline check before the returning call, that disregards its return
+     value.  */
+  g2 (i);
+  /* Implicit return without value, despite the return type; this combination
+     enables tail-calling of g2, and is recognized as a returning call.  */
+}
+
+int f5(int i) {
+  /* Inline check before the returning call, that doesn't return anything.  */
+  g (i);
+  /* Implicit return without value, despite the return type; this combination
+     enables tail-calling of g, and is recognized as a returning call.  */
+}
+
+void f6(int i) {
+  if (i)
+    /* Out-of-line check before the returning call.  */
+    return g2 (i);
+  /* Out-of-line check before implicit return.  */
+}
+
+int f7(int i) {
+  if (i)
+    /* Out-of-line check before the returning call.  */
+    return g (i);
+  /* Out-of-line check before implicit return.  */
+}
+
+int f8(int i) {
+  /* Not regarded as a returning call: the value returned below is
+     not the callee's result.  */
+  g (i);
+  /* Inline check after the non-returning call.  */
+  return i;
+}
+
+/* Out-of-line checks in f6 and f7, before returning calls and before return.  */
+/* { dg-final { scan-tree-dump-times "hardcfr_check" 4 "hardcfr" } } */
+/* Inline checking in all other functions.  */
+/* { dg-final { scan-tree-dump-times "__builtin_trap" 6 "hardcfr" } } */
+/* Check before the potential tail call in all but f8; in f6 and f7 it is out-of-line.  */
+/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 5 "hardcfr" } } */
+/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before stmt" 2 "hardcfr" } } */
