From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: by sourceware.org (Postfix, from userid 2140)
	id 1A35D385828E; Sat, 3 Sep 2022 23:58:17 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 1A35D385828E
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org;
	s=default; t=1662249497;
	bh=VeapIbJWr6xR7Iq0mt2rWy4LvRDQIMnNSIoTllI5GHU=;
	h=From:To:Subject:Date:From;
	b=l4Skc1IunnZToO8ix2ZzDhYuOqJzdFw7J4Um9uP11CE8JUhYom8KHk6KpHSf/H0GF
	 0Ps0s2A2NKZCE44tMohZR6F/PBuFzz574HaDfE5ZuhNDygIniuy9KInjD+WBNn+Soz
	 gFvZL5pfs2l1jKf4BEdyocK+XbVl/BulN6i9qsho=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: Alexandre Oliva
To: gcc-cvs@gcc.gnu.org
Subject: [gcc(refs/users/aoliva/heads/testme)] hardcfr: check before potential sibcalls
X-Act-Checkin: gcc
X-Git-Author: Alexandre Oliva
X-Git-Refname: refs/users/aoliva/heads/testme
X-Git-Oldrev: 6ae17fd3067c6741f9178e7a17da193190480608
X-Git-Newrev: d3696cfc39a6f429ae9a2cc930f95ad35d66a272
Message-Id: <20220903235817.1A35D385828E@sourceware.org>
Date: Sat, 3 Sep 2022 23:58:17 +0000 (GMT)
List-Id:

https://gcc.gnu.org/g:d3696cfc39a6f429ae9a2cc930f95ad35d66a272

commit d3696cfc39a6f429ae9a2cc930f95ad35d66a272
Author: Alexandre Oliva
Date:   Fri Sep 2 20:33:15 2022 -0300

    hardcfr: check before potential sibcalls

Diff:
---
 .../doc/gnat_rm/security_hardening_features.rst    | 140 +++++-
 gcc/common.opt                                     |   4 +
 gcc/doc/invoke.texi                                |  42 +-
 gcc/gimple-harden-control-flow.cc                  | 553 ++++++++++++++++-----
 .../c-c++-common/torture/harden-cfr-noret.c        |   2 +-
 .../c-c++-common/torture/harden-cfr-notail.c       |   8 +
 .../c-c++-common/torture/harden-cfr-tail.c         |  55 +-
 .../g++.dg/harden-cfr-throw-returning-O0.C         |  10 +
 .../torture/harden-cfr-noret-always-no-nothrow.C   |   4 +-
 .../torture/harden-cfr-noret-never-no-nothrow.C    |   5 +-
 .../g++.dg/torture/harden-cfr-noret-no-nothrow.C   |   5 +-
 .../g++.dg/torture/harden-cfr-throw-always.C       |   2 +-
 .../g++.dg/torture/harden-cfr-throw-nocleanup.C    |   2 +-
 .../g++.dg/torture/harden-cfr-throw-returning.C    |  31 ++
 gcc/testsuite/g++.dg/torture/harden-cfr-throw.C    |   4 +-
 gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c  |  40 ++
 16 files changed, 742 insertions(+), 165 deletions(-)

diff --git a/gcc/ada/doc/gnat_rm/security_hardening_features.rst b/gcc/ada/doc/gnat_rm/security_hardening_features.rst
index 9d762e7c8cc..54614145a97 100644
--- a/gcc/ada/doc/gnat_rm/security_hardening_features.rst
+++ b/gcc/ada/doc/gnat_rm/security_hardening_features.rst
@@ -263,25 +263,127 @@ For each block that is marked as visited, the mechanism
 checks that at least one of its predecessors, and at least one of its
 successors, are also marked as visited.
 
-Verification is performed just before returns and tail calls.
-Verification may also be performed before noreturn calls, whether only
-nothrow ones, with :switch:`-fhardcfr-check-noreturn-calls=nothrow`,
-or all of them, with :switch:`-fhardcfr-check-noreturn-calls=always`.
-Furthermore, any subprogram from which an exception may escape, i.e.,
-that may raise or propagate an exception that isn't handled
-internally, is automatically enclosed by a cleanup handler that
-performs verification, unless this is disabled with
-:switch:`-fno-hardcfr-check-exceptions`.  When a noreturn call returns
-control to its caller through an exception, verification may have
-already been performed before the call, assuming
-:switch:`-fhardcfr-check-noreturn-calls=always` is in effect.  The
-compiler arranges for already-checked noreturn calls without a
-preexisting handler to bypass the implicitly-added cleanup handler and
-thus the redundant check, but calls with a local handler will use it,
-which modifies the set of visited blocks, and checking will take place
-againwhen the caller reaches the next verification point, whether it
-is a return or reraise statement after the exception is otherwise
-handled, or even another noreturn call.
+Verification is performed just before a subprogram returns.  The
+following fragment:
+
+.. code-block:: ada
+
+   if X then
+      Y := F (Z);
+      return;
+   end if;
+
+
+gets turned into:
+
+.. code-block:: ada
+
+   type Visited_Bitmap is array (1..N) of Boolean with Pack;
+   Visited : Visited_Bitmap := (others => False);
+   --  Bitmap of visited blocks.  N is the basic block count.
+   [...]
+   --  Basic block #I
+   Visited(I) := True;
+   if X then
+      --  Basic block #J
+      Visited(J) := True;
+      Y := F (Z);
+      CFR.Check (N, Visited'Access, CFG'Access);
+      --  CFR is a hypothetical package whose Check procedure calls
+      --  libgcc's __hardcfr_check, that traps if the Visited bitmap
+      --  does not hold a valid path in CFG, the run-time
+      --  representation of the control flow graph in the enclosing
+      --  subprogram.
+      return;
+   end if;
+   --  Basic block #K
+   Visited(K) := True;
+
+
+Verification would also be performed before must-tail calls, and
+before early-marked potential tail calls, but these do not occur in
+practice, as potential tail calls are only detected in late
+optimization phases, too late for this transformation to act on them.
+
+In order to avoid adding verification after potential tail calls,
+which would prevent tail-call optimization, we recognize returning
+calls, i.e., calls whose result, if any, is returned by the calling
+subprogram to its caller immediately after the call returns.
+Verification is performed before such calls, whether or not they are
+ultimately optimized to tail calls.  This behavior is enabled by
+default whenever sibcall optimization is enabled (see
+:switch:`-foptimize-sibling-calls`); it may be disabled with
+:switch:`-fno-hardcfr-check-returning-calls`, or enabled with
+:switch:`-fhardcfr-check-returning-calls`, regardless of the
+optimization, but the lack of other optimizations may prevent calls
+from being recognized as returning calls:
+
+.. code-block:: ada
+
+   --  -fhardcfr-check-returning-calls: CFR.Check here.
+   P (X);
+   --  -fno-hardcfr-check-returning-calls: CFR.Check here.
+   return;
+
+or:
+
+.. code-block:: ada
+
+   --  -fhardcfr-check-returning-calls: CFR.Check here.
+   R := F (X);
+   --  -fno-hardcfr-check-returning-calls: CFR.Check here.
+   return R;
+
+
+Any subprogram from which an exception may escape, i.e., that may
+raise or propagate an exception that isn't handled internally, is
+conceptually enclosed by a cleanup handler that performs verification,
+unless this is disabled with :switch:`-fno-hardcfr-check-exceptions`.
+With this feature enabled, a subprogram body containing:
+
+.. code-block:: ada
+
+   --  ...
+   Y := F (X);  --  May raise exceptions.
+   --  ...
+   raise E;  --  Not handled internally.
+   --  ...
+
+
+gets modified as follows:
+
+.. code-block:: ada
+
+   begin
+      --  ...
+      Y := F (X);  --  May raise exceptions.
+      --  ...
+      raise E;  --  Not handled internally.
+      --  ...
+   exception
+      when others =>
+         CFR.Check (N, Visited'Access, CFG'Access);
+         raise;
+   end;
+
+
+Verification may also be performed before No_Return calls, whether
+only nothrow ones, with
+:switch:`-fhardcfr-check-noreturn-calls=nothrow`, or all of them, with
+:switch:`-fhardcfr-check-noreturn-calls=always`.  The default for this
+feature is :switch:`-fhardcfr-check-noreturn-calls=never`, which
+disables checking before No_Return calls.
+
+When a No_Return call returns control to its caller through an
+exception, verification may have already been performed before the
+call, assuming :switch:`-fhardcfr-check-noreturn-calls=always` is in
+effect.  The compiler arranges for already-checked No_Return calls
+without a preexisting handler to bypass the implicitly-added cleanup
+handler and thus the redundant check, but a local exception handler,
+if present, will modify the set of visited blocks, and checking will
+take place again when the caller reaches the next verification point,
+whether it is a return or reraise statement after the exception is
+otherwise handled, or even another No_Return call.
The instrumentation for hardening with control flow redundancy can be observed in dump files generated by the command-line option diff --git a/gcc/common.opt b/gcc/common.opt index 455853301e3..983cb4db7c6 100644 --- a/gcc/common.opt +++ b/gcc/common.opt @@ -1795,6 +1795,10 @@ fharden-control-flow-redundancy Common Var(flag_harden_control_flow_redundancy) Optimization Harden control flow by recording and checking execution paths. +fhardcfr-check-returning-calls +Common Var(flag_harden_control_flow_redundancy_check_returning_calls) Init(-1) Optimization +Check CFR execution paths also before calls followed by returns of their results. + fhardcfr-check-exceptions Common Var(flag_harden_control_flow_redundancy_check_exceptions) Init(-1) Optimization Check CFR execution paths also when exiting a function through an exception. diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi index c47cee28eaf..32e5ba2ed2c 100644 --- a/gcc/doc/invoke.texi +++ b/gcc/doc/invoke.texi @@ -626,6 +626,7 @@ Objective-C and Objective-C++ Dialects}. -fcf-protection=@r{[}full@r{|}branch@r{|}return@r{|}none@r{|}check@r{]} @gol -fharden-compares -fharden-conditional-branches @gol -fharden-control-flow-redundancy -fhardcfr-check-exceptions @gol +-fhardcfr-check-returning-calls @gol -fhardcfr-check-noreturn-calls=@r{[}always@r{|}nothrow@r{|}never@r{]} @gol -fstack-protector -fstack-protector-all -fstack-protector-strong @gol -fstack-protector-explicit -fstack-check @gol @@ -16566,12 +16567,23 @@ conditionals. @item -fharden-control-flow-redundancy @opindex fharden-control-flow-redundancy Emit extra code to set booleans when entering basic blocks, and to -verify, at function exits (returns, before tail calls, and optionally, -before escaping exceptions with @option{-fhardcfr-check-exceptions}, and -before noreturn calls with @option{-fhardcfr-check-noreturn-calls}), and -trap when they indicate an execution path that is incompatible with the -control flow graph. 
Tuning options @option{--param hardcfr-max-blocks}
-and @option{--param hardcfr-max-inline-blocks} are available.
+verify and trap, at function exits, when the booleans do not form an
+execution path that is compatible with the control flow graph.
+
+Verification takes place before returns, before mandatory tail calls
+(see below) and, optionally, before escaping exceptions with
+@option{-fhardcfr-check-exceptions}, before returning calls with
+@option{-fhardcfr-check-returning-calls}, and before noreturn calls
+with @option{-fhardcfr-check-noreturn-calls}.  Tuning options
+@option{--param hardcfr-max-blocks} and @option{--param
+hardcfr-max-inline-blocks} are available.
+
+Tail call optimization takes place too late to affect control flow
+redundancy; however, calls annotated as mandatory tail calls by
+language front ends, and any calls marked early enough as potential
+tail calls, would also have verification issued before them.  These
+possibilities are merely theoretical, as the required conditions can
+only be met with custom compiler plugins.
 
 @item -fhardcfr-check-exceptions
 @opindex fhardcfr-check-exceptions
@@ -16582,6 +16594,24 @@ escape points, as if the function body was wrapped with a cleanup
 handler that performed the check and reraised.  This option is enabled
 by default; use @option{-fno-hardcfr-check-exceptions} to disable it.
 
+@item -fhardcfr-check-returning-calls
+@opindex fhardcfr-check-returning-calls
+@opindex fno-hardcfr-check-returning-calls
+When @option{-fharden-control-flow-redundancy} is active, check the
+recorded execution path against the control flow graph before any
+function call immediately followed by a return of its result, if any,
+so as not to prevent tail-call optimization, whether or not the call
+is ultimately optimized to a tail call.
+
+This option is enabled by default whenever
+@option{-foptimize-sibling-calls} is enabled, but it can be enabled
+(or disabled, using its negated form) explicitly, regardless of the
+optimization.  Note, however, that without other optimizations, the
+initially-unified return block for each function remains separated
+from any preceding statements, and therefore calls will not be found
+to be immediately followed by a return statement, so they won't be
+recognized or handled as returning calls.
+
 @item -fhardcfr-check-noreturn-calls=@r{[}always@r{|}nothrow@r{|}never@r{]}
 @opindex fhardcfr-check-noreturn-calls
 When @option{-fharden-control-flow-redundancy} is active, check the
diff --git a/gcc/gimple-harden-control-flow.cc b/gcc/gimple-harden-control-flow.cc
index 6d03e24d38a..7b86e68ff5f 100644
--- a/gcc/gimple-harden-control-flow.cc
+++ b/gcc/gimple-harden-control-flow.cc
@@ -128,6 +128,293 @@ public:
 }
 
+/* Return TRUE iff CFR checks should be inserted before returning
+   calls.  */
+
+static bool
+check_returning_calls_p ()
+{
+  return
+    flag_harden_control_flow_redundancy_check_returning_calls > 0
+    || (flag_harden_control_flow_redundancy_check_returning_calls < 0
+	/* Gates pass_tail_calls.  */
+	&& flag_optimize_sibling_calls
+	/* Gates pass_all_optimizations.  */
+	&& optimize >= 1 && !optimize_debug);
+}
+
+/* Scan BB from the end, updating *RETPTR, if given, as return stmts
+   and copies are found.  Return a call or a stmt that cannot appear
+   after a tail call, or NULL if the top of the block is reached
+   without finding any.  */
+
+static gimple *
+hardcfr_scan_block (basic_block bb, tree **retptr)
+{
+  gimple_stmt_iterator gsi;
+  for (gsi = gsi_last_bb (bb); !gsi_end_p (gsi); gsi_prev (&gsi))
+    {
+      gimple *stmt = gsi_stmt (gsi);
+
+      /* Ignore labels, returns, nops, clobbers and debug stmts.
*/ + if (gimple_code (stmt) == GIMPLE_LABEL + || gimple_code (stmt) == GIMPLE_NOP + || gimple_code (stmt) == GIMPLE_PREDICT + || gimple_clobber_p (stmt) + || is_gimple_debug (stmt)) + continue; + + if (gimple_code (stmt) == GIMPLE_RETURN) + { + greturn *gret = as_a (stmt); + if (retptr) + { + gcc_checking_assert (!*retptr); + *retptr = gimple_return_retval_ptr (gret); + } + continue; + } + + /* Check for a call. */ + if (is_gimple_call (stmt)) + return stmt; + + /* Allow simple copies to the return value, updating the return + value to be found in earlier assignments. */ + if (retptr && *retptr && gimple_assign_single_p (stmt) + && **retptr == gimple_assign_lhs (stmt)) + { + *retptr = gimple_assign_rhs1_ptr (stmt); + continue; + } + + return stmt; + } + + /* Any other kind of stmt will prevent a tail call. */ + return NULL; +} + +/* Return TRUE iff CALL is to be preceded by a CFR checkpoint, i.e., + if it's a returning call (one whose result is ultimately returned + without intervening non-copy statements) and we're checking + returning calls, a __builtin_return call (noreturn with a path to + the exit block), a must-tail call, or a tail call. */ + +static bool +returning_call_p (gcall *call) +{ + if (!(gimple_call_noreturn_p (call) + || gimple_call_must_tail_p (call) + || gimple_call_tail_p (call) + || check_returning_calls_p ())) + return false; + + /* Quickly check that there's a path to exit compatible with a + returning call. Detect infinite loops through the counter. */ + basic_block bb = gimple_bb (call); + auto_vec path; + for (int i = n_basic_blocks_for_fn (cfun); + bb != EXIT_BLOCK_PTR_FOR_FN (cfun) && i--; + bb = single_succ (bb)) + if (!single_succ_p (bb) + || (single_succ_edge (bb)->flags & EDGE_EH) != 0) + return false; + else + path.safe_push (bb); + + /* Check the stmts in the blocks and trace the return value. 
*/ + tree *retptr = NULL; + for (;;) + { + gcc_checking_assert (!path.is_empty ()); + basic_block bb = path.pop (); + gimple *stop = hardcfr_scan_block (bb, &retptr); + if (stop) + { + if (stop != call) + return false; + gcc_checking_assert (path.is_empty ()); + break; + } + + gphi *retphi = NULL; + if (retptr && *retptr && TREE_CODE (*retptr) == SSA_NAME + && !SSA_NAME_IS_DEFAULT_DEF (*retptr) + && SSA_NAME_DEF_STMT (*retptr) + && is_a (SSA_NAME_DEF_STMT (*retptr)) + && gimple_bb (SSA_NAME_DEF_STMT (*retptr)) == bb) + { + retphi = as_a (SSA_NAME_DEF_STMT (*retptr)); + gcc_checking_assert (gimple_phi_result (retphi) == *retptr); + } + else + continue; + + gcc_checking_assert (!path.is_empty ()); + edge e = single_succ_edge (path.last ()); + int i = EDGE_COUNT (bb->preds); + while (i--) + if (EDGE_PRED (bb, i) == e) + break; + gcc_checking_assert (i >= 0); + retptr = gimple_phi_arg_def_ptr (retphi, i); + } + + return (gimple_call_noreturn_p (call) + || gimple_call_must_tail_p (call) + || gimple_call_tail_p (call) + || (gimple_call_lhs (call) == (retptr ? *retptr : NULL) + && check_returning_calls_p ())); +} + +typedef auto_vec chk_edges_t; + +/* Declare for mutual recursion. */ +static bool hardcfr_sibcall_search_preds (basic_block bb, + chk_edges_t &chk_edges, + int &count_chkcall, + auto_sbitmap &chkcall_blocks, + int &count_postchk, + auto_sbitmap &postchk_blocks, + tree *retptr); + +/* Search backwards from the end of BB for a mandatory or potential + sibcall. Schedule the block to be handled sort-of like noreturn if + so. Recurse to preds, with updated RETPTR, if the block only + contains stmts that may follow such a call, scheduling checking at + edges and marking blocks as post-check as needed. Return true iff, + at the end of the block, a check will have already been + performed. 
*/
+
+static bool
+hardcfr_sibcall_search_block (basic_block bb,
+			      chk_edges_t &chk_edges,
+			      int &count_chkcall,
+			      auto_sbitmap &chkcall_blocks,
+			      int &count_postchk,
+			      auto_sbitmap &postchk_blocks,
+			      tree *retptr)
+{
+  /* Conditionals and internal exceptions rule out tail calls.  */
+  if (!single_succ_p (bb)
+      || (single_succ_edge (bb)->flags & EDGE_EH) != 0)
+    return false;
+
+  gimple *stmt = hardcfr_scan_block (bb, &retptr);
+  if (!stmt)
+    return hardcfr_sibcall_search_preds (bb, chk_edges,
+					 count_chkcall, chkcall_blocks,
+					 count_postchk, postchk_blocks,
+					 retptr);
+
+  if (!is_a <gcall *> (stmt))
+    return false;
+
+  /* Avoid disrupting mandatory or early-marked tail calls,
+     inserting the check before them.  This works for
+     must-tail calls, but tail calling as an optimization is
+     detected too late for us.
+
+     Also check for noreturn calls here.  Noreturn calls won't
+     normally have edges to exit, so they won't be found here,
+     but __builtin_return does, and we must check before
+     it, so handle it like a tail call.  */
+  gcall *call = as_a <gcall *> (stmt);
+  if (!(gimple_call_noreturn_p (call)
+	|| gimple_call_must_tail_p (call)
+	|| gimple_call_tail_p (call)
+	|| (gimple_call_lhs (call) == (retptr ? *retptr : NULL)
+	    && check_returning_calls_p ())))
+    return false;
+
+  gcc_checking_assert (returning_call_p (call));
+
+  /* We found a call that is to be preceded by checking.  */
+  if (bitmap_set_bit (chkcall_blocks, bb->index))
+    ++count_chkcall;
+  else
+    gcc_unreachable ();
+  return true;
+}
+
+
+/* Search preds of BB for a mandatory or potential sibcall or
+   returning call, and arrange for the blocks containing them to have
+   a check inserted before the call, like noreturn calls.  If any
+   preds are found to perform checking, schedule checks at the edges
+   of those that don't, and mark BB as postcheck.
*/ + +static bool +hardcfr_sibcall_search_preds (basic_block bb, + chk_edges_t &chk_edges, + int &count_chkcall, + auto_sbitmap &chkcall_blocks, + int &count_postchk, + auto_sbitmap &postchk_blocks, + tree *retptr) +{ + /* For the exit block, we wish to force a check at every + predecessor, so pretend we've already found a pred that had + checking, so that we schedule checking at every one of its pred + edges. */ + bool first = bb->index >= NUM_FIXED_BLOCKS; + bool postchecked = true; + + gphi *retphi = NULL; + if (retptr && *retptr && TREE_CODE (*retptr) == SSA_NAME + && !SSA_NAME_IS_DEFAULT_DEF (*retptr) + && SSA_NAME_DEF_STMT (*retptr) + && is_a (SSA_NAME_DEF_STMT (*retptr)) + && gimple_bb (SSA_NAME_DEF_STMT (*retptr)) == bb) + { + retphi = as_a (SSA_NAME_DEF_STMT (*retptr)); + gcc_checking_assert (gimple_phi_result (retphi) == *retptr); + } + + for (int i = EDGE_COUNT (bb->preds); i--; first = false) + { + edge e = EDGE_PRED (bb, i); + + bool checked + = hardcfr_sibcall_search_block (e->src, chk_edges, + count_chkcall, chkcall_blocks, + count_postchk, postchk_blocks, + !retphi ? retptr + : gimple_phi_arg_def_ptr (retphi, i)); + + if (first) + { + postchecked = checked; + continue; + } + + /* When we first find a checked block, force a check at every + other incoming edge we've already visited, and those we + visit afterwards that don't have their own check, so that + when we reach BB, the check has already been performed. */ + if (!postchecked && checked) + { + for (int j = EDGE_COUNT (bb->preds); --j > i; ) + chk_edges.safe_push (EDGE_PRED (bb, j)); + postchecked = true; + } + if (postchecked && !checked) + chk_edges.safe_push (EDGE_PRED (bb, i)); + } + + if (postchecked && bb->index >= NUM_FIXED_BLOCKS) + { + if (bitmap_set_bit (postchk_blocks, bb->index)) + count_postchk++; + else + gcc_unreachable (); + } + + return postchecked; +} + + class rt_bb_visited { /* Use a sufficiently wide unsigned type to hold basic block numbers. 
*/ @@ -263,7 +550,7 @@ class rt_bb_visited public: /* Prepare to add control flow redundancy testing to CFUN. */ - rt_bb_visited (int noreturn_blocks) + rt_bb_visited (int checkpoints) : nblocks (n_basic_blocks_for_fn (cfun)), vword_type (NULL), ckseq (NULL), rtcfg (NULL) { @@ -347,8 +634,7 @@ public: gimple_seq_add_stmt (&ckseq, detach); if (nblocks - NUM_FIXED_BLOCKS > blknum (param_hardcfr_max_inline_blocks) - || (EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds) - + noreturn_blocks > 1)) + || checkpoints > 1) { /* Make sure vword_bits is wide enough for the representation of nblocks in rtcfg. Compare with vword_bits << vword_bits, @@ -373,63 +659,32 @@ public: gimple_seq_add_stmt (&ckseq, ckfail_init); } - /* Insert SEQ before a resx, or noreturn or tail call at the end of - INSBB, and return TRUE, otherwise return FALSE. */ - bool insert_exit_check (gimple_seq seq, basic_block insbb) + /* Insert SEQ before a resx or a call in INSBB. */ + void insert_exit_check_in_block (gimple_seq seq, basic_block insbb) { - /* If the returning block ends with a noreturn call, insert - checking before it. This is particularly important for - __builtin_return. Other noreturn calls won't have an edge to - the exit block, so they won't even be considered as exit paths. - - Insert-on-edge inserts before other return stmts, but not - before calls, and if a single-block function had the check - sequence inserted after a noreturn call, it would be useless, - but checking would still flag it as malformed if block 2 has a - fallthru edge to the exit block. - - Also avoid disrupting tail calls, inserting the check before - them. This works for must-tail calls, but tail calling as an - optimization is detected too late for us. 
*/ gimple_stmt_iterator gsi = gsi_last_bb (insbb); - gimple *ret = gsi_stmt (gsi); - if (ret && is_a (ret)) - { - gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT); - return true; - } - - if (ret && is_a (ret)) - { + while (!gsi_end_p (gsi)) + if (is_a (gsi_stmt (gsi)) + || is_a (gsi_stmt (gsi))) + break; + else gsi_prev (&gsi); - if (!gsi_end_p (gsi)) - ret = gsi_stmt (gsi); - } - if (ret - && is_a (ret) - && (gimple_call_noreturn_p (ret) - || gimple_call_must_tail_p (as_a (ret)) - || gimple_call_tail_p (as_a (ret)))) - gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT); - else - return false; - return true; + gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT); } - /* Insert SEQ on E, or close enough (e.g., before a noreturn or tail - call at the end of E->src). */ - void insert_exit_check (gimple_seq seq, edge e) + /* Insert SEQ on E. */ + void insert_exit_check_on_edge (gimple_seq seq, edge e) { - if (!insert_exit_check (seq, e->src)) - gsi_insert_seq_on_edge_immediate (e, seq); + gsi_insert_seq_on_edge_immediate (e, seq); } - /* Add checking code on every exit edge, and initialization code on + /* Add checking code to CHK_EDGES, and initialization code on the entry edge. Before this point, the CFG has been undisturbed, and all the needed data has been collected and safely stowed. */ - void check (int count_noreturn, auto_sbitmap const &noreturn_blocks) + void check (chk_edges_t &chk_edges, + int count_chkcall, auto_sbitmap const &chkcall_blocks) { /* If we're using out-of-line checking, create and statically initialize the CFG checking representation, generate the @@ -491,93 +746,115 @@ public: rtcfg)); gimple_seq_add_stmt (&ckseq, call_chk); + gimple *clobber = gimple_build_assign (visited, + build_clobber + (TREE_TYPE (visited))); + gimple_seq_add_stmt (&ckseq, clobber); + /* If we have multiple exit edges, insert (copies of) ckseq in all of them. 
*/ - for (int i = EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (cfun)->preds); - i--; ) + for (int i = chk_edges.length (); i--; ) { gimple_seq seq = ckseq; /* Copy the sequence, unless we're dealing with the last edge (we're counting down to zero). */ - if (i || count_noreturn) + if (i || count_chkcall) seq = gimple_seq_copy (seq); - edge e = EDGE_PRED (EXIT_BLOCK_PTR_FOR_FN (cfun), i); + edge e = chk_edges[i]; if (dump_file) - fprintf (dump_file, - "Inserting out-of-line check in" - " block %i's edge to exit.\n", - e->src->index); + { + if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)) + fprintf (dump_file, + "Inserting out-of-line check in" + " block %i's edge to exit.\n", + e->src->index); + else + fprintf (dump_file, + "Inserting out-of-line check in" + " block %i's edge to postcheck block %i.\n", + e->src->index, e->dest->index); + } - insert_exit_check (seq, e); + insert_exit_check_on_edge (seq, e); - gcc_checking_assert (!bitmap_bit_p (noreturn_blocks, e->src->index)); + gcc_checking_assert (!bitmap_bit_p (chkcall_blocks, e->src->index)); } sbitmap_iterator it; unsigned i; - EXECUTE_IF_SET_IN_BITMAP (noreturn_blocks, 0, i, it) + EXECUTE_IF_SET_IN_BITMAP (chkcall_blocks, 0, i, it) { basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i); gimple_seq seq = ckseq; - gcc_checking_assert (count_noreturn > 0); - if (--count_noreturn) + gcc_checking_assert (count_chkcall > 0); + if (--count_chkcall) seq = gimple_seq_copy (seq); if (dump_file) fprintf (dump_file, - "Inserting out-of-line check in noreturn block %i.\n", + "Inserting out-of-line check before stmt in block %i.\n", bb->index); - if (!insert_exit_check (seq, bb)) - gcc_unreachable (); + insert_exit_check_in_block (seq, bb); } - gcc_checking_assert (count_noreturn == 0); + gcc_checking_assert (count_chkcall == 0); } else { /* Inline checking requires a single exit edge. 
*/ - gimple *last = gsi_stmt (gsi_last (ckseq)); + gimple *last = gimple_build_assign (visited, + build_clobber + (TREE_TYPE (visited))); + gimple_seq_add_stmt (&ckseq, last); - if (!count_noreturn) + if (!count_chkcall) { + edge e = single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)); + if (dump_file) - fprintf (dump_file, - "Inserting inline check in" - " block %i's edge to exit.\n", - single_pred (EXIT_BLOCK_PTR_FOR_FN (cfun))->index); + { + if (e->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)) + fprintf (dump_file, + "Inserting out-of-line check in" + " block %i's edge to postcheck block %i.\n", + e->src->index, e->dest->index); + else + fprintf (dump_file, + "Inserting inline check in" + " block %i's edge to exit.\n", + e->src->index); + } - insert_exit_check (ckseq, - single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun))); + insert_exit_check_on_edge (ckseq, e); } else { - gcc_checking_assert (count_noreturn == 1); + gcc_checking_assert (count_chkcall == 1); sbitmap_iterator it; unsigned i; - EXECUTE_IF_SET_IN_BITMAP (noreturn_blocks, 0, i, it) + EXECUTE_IF_SET_IN_BITMAP (chkcall_blocks, 0, i, it) { basic_block bb = BASIC_BLOCK_FOR_FN (cfun, i); gimple_seq seq = ckseq; - gcc_checking_assert (count_noreturn > 0); - if (--count_noreturn) + gcc_checking_assert (count_chkcall > 0); + if (--count_chkcall) seq = gimple_seq_copy (seq); if (dump_file) fprintf (dump_file, - "Inserting inline check in noreturn block %i.\n", + "Inserting inline check before stmt in block %i.\n", bb->index); - if (!insert_exit_check (seq, bb)) - gcc_unreachable (); + insert_exit_check_in_block (seq, bb); } - gcc_checking_assert (count_noreturn == 0); + gcc_checking_assert (count_chkcall == 0); } /* The inserted ckseq computes CKFAIL at LAST. Now we have to @@ -708,37 +985,48 @@ public: /* Add to BB code to set its bit in VISITED, and add to RTCFG or CKSEQ the data or code needed to check BB's predecessors and - successors. Do NOT change the CFG. */ - void visit (basic_block bb, bool noreturn) + successors. 
If NORETURN, assume the block is a checkpoint, + whether or not it has an edge to EXIT. If POSTCHECK, assume the + block post-dominates checkpoints and therefore no bitmap setting + or checks are to be performed in or for it. Do NOT change the + CFG. */ + void visit (basic_block bb, bool noreturn, bool postcheck) { /* Set the bit in VISITED when entering the block. */ gimple_stmt_iterator gsi = gsi_after_labels (bb); - gsi_insert_seq_before (&gsi, vset (bb), GSI_SAME_STMT); + if (!postcheck) + gsi_insert_seq_before (&gsi, vset (bb), GSI_SAME_STMT); if (rtcfg) { - /* Build a list of (index, mask) terminated by (NULL, 0). - Consolidate masks with the same index when they're - adjacent. First, predecessors. Count backwards, because - we're going to reverse the list. The order shouldn't - matter, but let's not make it surprising. */ - for (int i = EDGE_COUNT (bb->preds); i--; ) - if (push_rtcfg_pair (EDGE_PRED (bb, i)->src, bb, - ENTRY_BLOCK_PTR_FOR_FN (cfun))) - break; + if (!postcheck) + { + /* Build a list of (index, mask) terminated by (NULL, 0). + Consolidate masks with the same index when they're + adjacent. First, predecessors. Count backwards, because + we're going to reverse the list. The order shouldn't + matter, but let's not make it surprising. */ + for (int i = EDGE_COUNT (bb->preds); i--; ) + if (push_rtcfg_pair (EDGE_PRED (bb, i)->src, bb, + ENTRY_BLOCK_PTR_FOR_FN (cfun))) + break; + } rtcfg = tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rtcfg); - /* Then, successors. */ - if (!noreturn - || !push_rtcfg_pair (EXIT_BLOCK_PTR_FOR_FN (cfun), - bb, EXIT_BLOCK_PTR_FOR_FN (cfun))) - for (int i = EDGE_COUNT (bb->succs); i--; ) - if (push_rtcfg_pair (EDGE_SUCC (bb, i)->dest, bb, - EXIT_BLOCK_PTR_FOR_FN (cfun))) - break; + if (!postcheck) + { + /* Then, successors. 
*/ + if (!noreturn + || !push_rtcfg_pair (EXIT_BLOCK_PTR_FOR_FN (cfun), + bb, EXIT_BLOCK_PTR_FOR_FN (cfun))) + for (int i = EDGE_COUNT (bb->succs); i--; ) + if (push_rtcfg_pair (EDGE_SUCC (bb, i)->dest, bb, + EXIT_BLOCK_PTR_FOR_FN (cfun))) + break; + } rtcfg = tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rtcfg); } - else + else if (!postcheck) { /* Schedule test to fail if the block was reached but somehow none of its predecessors were. */ @@ -802,8 +1090,8 @@ pass_harden_control_flow_redundancy::execute (function *fun) = (flag_exceptions && check_before_noreturn_calls && flag_harden_control_flow_redundancy_check_noreturn >= HCFRNR_ALWAYS); - basic_block bb_eh_cleanup = NULL; basic_block bb; + basic_block bb_eh_cleanup = NULL; if (check_at_escaping_exceptions) { @@ -839,12 +1127,13 @@ pass_harden_control_flow_redundancy::execute (function *fun) if (lookup_stmt_eh_lp (stmt) != 0) continue; - /* Don't split blocks at, nor add EH edvges to, tail + /* Don't split blocks at, nor add EH edges to, tail calls, we will add verification before the call anyway. */ if (is_a (stmt) && (gimple_call_must_tail_p (as_a (stmt)) - || gimple_call_tail_p (as_a (stmt)))) + || gimple_call_tail_p (as_a (stmt)) + || returning_call_p (as_a (stmt)))) continue; if (!gsi_one_before_end_p (gsi)) @@ -947,13 +1236,18 @@ pass_harden_control_flow_redundancy::execute (function *fun) } } + /* These record blocks with calls that are to be preceded by + checkpoints, such as noreturn calls (if so chosen), must-tail + calls, potential early-marked tail calls, and returning calls (if + so chosen). */ + int count_chkcall = 0; + auto_sbitmap chkcall_blocks (last_basic_block_for_fn (fun)); + bitmap_clear (chkcall_blocks); + /* We wish to add verification at blocks without successors, such as noreturn calls (raising or not) and the reraise at the cleanup block, but not other reraises: they will go through the cleanup block. 
*/ - int count_noreturn = 0; - auto_sbitmap noreturn_blocks (last_basic_block_for_fn (fun)); - bitmap_clear (noreturn_blocks); if (check_before_noreturn_calls) FOR_EACH_BB_FN (bb, fun) { @@ -992,8 +1286,8 @@ pass_harden_control_flow_redundancy::execute (function *fun) print_gimple_stmt (dump_file, stmt, 0); } - if (bitmap_set_bit (noreturn_blocks, bb->index)) - count_noreturn++; + if (bitmap_set_bit (chkcall_blocks, bb->index)) + count_chkcall++; else gcc_unreachable (); } @@ -1033,27 +1327,27 @@ pass_harden_control_flow_redundancy::execute (function *fun) print_gimple_stmt (dump_file, stmt, 0); } - if (bitmap_set_bit (noreturn_blocks, bb->index)) - count_noreturn++; + if (bitmap_set_bit (chkcall_blocks, bb->index)) + count_chkcall++; else gcc_unreachable (); } } else if (bb_eh_cleanup) { - if (bitmap_set_bit (noreturn_blocks, bb_eh_cleanup->index)) - count_noreturn++; + if (bitmap_set_bit (chkcall_blocks, bb_eh_cleanup->index)) + count_chkcall++; else gcc_unreachable (); } gcc_checking_assert (!bb_eh_cleanup - || bitmap_bit_p (noreturn_blocks, bb_eh_cleanup->index)); + || bitmap_bit_p (chkcall_blocks, bb_eh_cleanup->index)); /* If we don't have edges to exit nor noreturn calls (including the cleanup reraise), then we may skip instrumentation: that would amount to a function that ends with an infinite loop. */ - if (!count_noreturn + if (!count_chkcall && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (fun)->preds) == 0) { if (dump_file) @@ -1063,7 +1357,27 @@ pass_harden_control_flow_redundancy::execute (function *fun) return 0; } - rt_bb_visited vstd (count_noreturn); + /* Search for must-tail calls, early-marked potential tail calls, + and, if requested, returning calls. 
As we introduce early + checks, blocks that post-dominate them are noted in POSTCHK_BLOCKS, + so that they skip their own bitmap updates and checks. */ + int count_postchk = 0; + auto_sbitmap postchk_blocks (last_basic_block_for_fn (fun)); + bitmap_clear (postchk_blocks); + chk_edges_t chk_edges; + hardcfr_sibcall_search_preds (EXIT_BLOCK_PTR_FOR_FN (fun), chk_edges, + count_chkcall, chkcall_blocks, + count_postchk, postchk_blocks, + NULL); + + rt_bb_visited vstd (chk_edges.length () + count_chkcall); + + auto_sbitmap combined_blocks (last_basic_block_for_fn (fun)); + bitmap_copy (combined_blocks, chkcall_blocks); + int i; + edge *e; + FOR_EACH_VEC_ELT (chk_edges, i, e) + if (!bitmap_set_bit (combined_blocks, (*e)->src->index)) + gcc_unreachable (); /* Visit blocks in index order, because building rtcfg depends on that. Blocks must be compact, which the cleanup_cfg requirement @@ -1076,10 +1390,11 @@ pass_harden_control_flow_redundancy::execute (function *fun) { bb = BASIC_BLOCK_FOR_FN (fun, i); gcc_checking_assert (bb->index == i); - vstd.visit (bb, bitmap_bit_p (noreturn_blocks, i)); + vstd.visit (bb, bitmap_bit_p (combined_blocks, i), + bitmap_bit_p (postchk_blocks, i)); } - vstd.check (count_noreturn, noreturn_blocks); + vstd.check (chk_edges, count_chkcall, chkcall_blocks); return TODO_update_ssa diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c index a58afd7944c..fdd803109a4 100644 --- a/gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c @@ -7,7 +7,7 @@ #define ATTR_NOTHROW_OPT __attribute__ ((__nothrow__)) #endif -extern void __attribute__ ((__noreturn__)) ATTR_NOTHROW_OPT g(void); +extern void __attribute__ ((__noreturn__)) ATTR_NOTHROW_OPT g (void); void f(int i) { if (i) diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c new file mode 100644 index 00000000000..6d11487bbba --- /dev/null +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c @@ -0,0 +1,8 @@
+/* { dg-do compile } */ +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ + +#include "harden-cfr-tail.c" + +/* Inline checking after the calls, disabling tail calling. */ +/* { dg-final { scan-tree-dump-times "__builtin_trap" 5 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 0 "hardcfr" } } */ diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c index 09daa70fa3a..d5467eafa9f 100644 --- a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c @@ -1,17 +1,52 @@ /* { dg-do compile } */ -/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects" } */ +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-returning-calls -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects -Wno-return-type" } */ -/* We'd like to check that we insert checking so as to not disrupt tail calls, - but unfortunately mandatory tail calls are not available in C, and optimizing - calls as tail calls only takes place after hardcfr. */ +/* Check that we insert CFR checking so as to not disrupt tail calls. + Mandatory tail calls are not available in C, and optimizing calls as tail + calls only takes place after hardcfr, so we insert checking before calls + followed by copies and return stmts with the same return value, that might + (or might not) end up optimized to tail calls. */ -extern int g(int i); +extern int g (int i); -int f(int i) { +int f1(int i) { + /* Inline check before the returning call. */ return g (i); } -/* Inline checking before the tail call. */ -/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ -/* Inline checking before the tail call. 
*/ -/* { dg-final { scan-tree-dump-times "\\\[tail\]" 1 "hardcfr" { xfail *-*-* } } } */ +extern void g2 (int i); + +void f2(int i) { + /* Inline check before the returning call, that ignores the returned value, + matching the value-less return. */ + g2 (i); + return; +} + +void f3(int i) { + /* Inline check before the returning call. */ + g (i); +} + +void f4(int i) { + if (i) + /* Out-of-line check before the returning call. */ + return g2 (i); + /* Out-of-line check before implicit return. */ +} + +int f5(int i) { + /* Not regarded as a returning call, returning value other than callee's + returned value. */ + g (i); + /* Inline check after the non-returning call. */ + return i; +} + +/* Out-of-line checks in f4, before returning calls and before return. */ +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ +/* Inline checking in all other functions. */ +/* { dg-final { scan-tree-dump-times "__builtin_trap" 4 "hardcfr" } } */ +/* Check before tail-call in all but f5, but f4 is out-of-line. */ +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 3 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before stmt" 1 "hardcfr" } } */ diff --git a/gcc/testsuite/g++.dg/harden-cfr-throw-returning-O0.C b/gcc/testsuite/g++.dg/harden-cfr-throw-returning-O0.C new file mode 100644 index 00000000000..f0338ccc361 --- /dev/null +++ b/gcc/testsuite/g++.dg/harden-cfr-throw-returning-O0.C @@ -0,0 +1,10 @@ +/* { dg-do compile } */ +/* { dg-options "-fharden-control-flow-redundancy -foptimize-sibling-calls -fdump-tree-hardcfr -O0" } */ + +/* -fhardcfr-check-returning-calls gets implicitly disabled because, + at -O0, -foptimize-sibling-calls has no effect.
*/ + +#include "torture/harden-cfr-throw.C" + +/* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C index dad9693e1d2..0d35920c7ee 100644 --- a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C @@ -9,8 +9,8 @@ /* All 3 noreturn calls. */ /* { dg-final { scan-tree-dump-times "Bypassing cleanup" 3 "hardcfr" } } */ /* Out-of-line checks in f. */ -/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in noreturn block" 1 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in block \[0-9]*'s edge to exit" 1 "hardcfr" } } */ /* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ /* Inline checks in h and h2. */ -/* { dg-final { scan-tree-dump-times "Inserting inline check in noreturn block" 2 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 2 "hardcfr" } } */ /* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C index 33e1ae26f80..b7d247ff43c 100644 --- a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C @@ -10,8 +10,9 @@ /* All 3 noreturn calls. */ /* { dg-final { scan-tree-dump-times "Associated cleanup" 3 "hardcfr" } } */ /* Out-of-line checks in f. 
*/ -/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in noreturn block" 1 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in block \[0-9]*'s edge to exit" 1 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before stmt" 1 "hardcfr" } } */ /* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ /* Inline checks in h and h2. */ -/* { dg-final { scan-tree-dump-times "Inserting inline check in noreturn block" 2 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 2 "hardcfr" } } */ /* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C index b47d880ada2..62c58cfd406 100644 --- a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C @@ -15,8 +15,9 @@ void __attribute__ ((__noreturn__)) h (void); /* All 3 noreturn calls. */ /* { dg-final { scan-tree-dump-times "Associated cleanup" 3 "hardcfr" } } */ /* Out-of-line checks in f. */ -/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in noreturn block" 1 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in block \[0-9]*'s edge to exit" 1 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before stmt" 1 "hardcfr" } } */ /* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ /* Inline checks in h and h2. 
*/ -/* { dg-final { scan-tree-dump-times "Inserting inline check in noreturn block" 2 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 2 "hardcfr" } } */ /* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.C b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.C index 52ef7bc601a..0286f6e6d3f 100644 --- a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.C +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.C @@ -1,5 +1,5 @@ /* { dg-do compile } */ -/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noreturn-calls=always -fdump-tree-hardcfr -ffat-lto-objects" } */ +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-returning-calls -fhardcfr-check-noreturn-calls=always -fdump-tree-hardcfr -ffat-lto-objects" } */ /* Check that we insert cleanups for checking around the bodies of maybe-throwing functions, and also checking before noreturn diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C index da7c9cf1033..885b0b236af 100644 --- a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C @@ -1,5 +1,5 @@ /* { dg-do compile } */ -/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects" } */ +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exceptions -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ /* Check that we do not insert cleanups for checking around the bodies of maybe-throwing functions. 
h4 doesn't get any checks, because we diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-returning.C b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-returning.C new file mode 100644 index 00000000000..d150470c618 --- /dev/null +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-returning.C @@ -0,0 +1,31 @@ +/* { dg-do compile } */ +/* { dg-options "-fharden-control-flow-redundancy -foptimize-sibling-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ + +/* Check that we insert cleanups for checking around the bodies of + maybe-throwing functions. These results depend on checking before + returning calls, which is only enabled when sibcall optimizations + are enabled, so change the optimization mode to -O1 for f and f2, + so that -foptimize-sibling-calls can take effect and enable + -fhardcfr-check-returning-calls, so that we get the same results. + There is a separate test for -O0. */ + +#if ! __OPTIMIZE__ +void __attribute__ ((__optimize__ (1))) f(int i); +void __attribute__ ((__optimize__ (1))) f2(int i); +void __attribute__ ((__optimize__ (1))) h3(void); +#endif + +#include "harden-cfr-throw.C" + +/* f gets out-of-line checks before the unwrapped tail call and in the + else edge. */ +/* f2 gets out-of-line checks before both unwrapped tail calls. */ +/* h gets out-of-line checks before the implicit return and in the + cleanup block. */ +/* h2 and h2b get out-of-line checks before the cleanup returning + call, and in the cleanup block. */ +/* h3 gets an inline check before the __cxa_end_catch returning call. */ +/* h4 gets an inline check in the cleanup block. 
*/ + +/* { dg-final { scan-tree-dump-times "hardcfr_check" 10 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "builtin_trap" 2 "hardcfr" } } */ diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw.C b/gcc/testsuite/g++.dg/torture/harden-cfr-throw.C index 2dbc67c34d9..992fbdad381 100644 --- a/gcc/testsuite/g++.dg/torture/harden-cfr-throw.C +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw.C @@ -1,5 +1,5 @@ /* { dg-do compile } */ -/* { dg-options "-fharden-control-flow-redundancy -fdump-tree-hardcfr -ffat-lto-objects" } */ +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ /* Check that we insert cleanups for checking around the bodies of maybe-throwing functions. */ @@ -57,8 +57,8 @@ void h3(void) { } void h4(void) { - /* Inline check before the __cxa_throw noreturn call. */ throw 1; + /* Inline check in the cleanup around the __cxa_throw noreturn call. */ } /* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */ diff --git a/gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c b/gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c new file mode 100644 index 00000000000..634d98f1ffc --- /dev/null +++ b/gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c @@ -0,0 +1,40 @@ +/* { dg-do compile } */ +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-returning-calls -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objects -Wno-return-type" } */ + +/* In C only, check some additional cases (comparing with + c-c++-common/torture/harden-cfr-tail.c) of falling off the end of non-void + function. C++ would issue an unreachable call in these cases. */ + +extern int g (int i); + +int f1(int i) { + /* Inline check before the returning call, that doesn't return anything. */ + g (i); + /* Implicit return without value, despite the return type; this combination + enables tail-calling of g, and is recognized as a returning call. 
*/ +} + +extern void g2 (int i); + +int f2(int i) { + /* Inline check before the returning call, that disregards its return + value. */ + g2 (i); + /* Implicit return without value, despite the return type; this combination + enables tail-calling of g2, and is recognized as a returning call. */ +} + +int f3(int i) { + if (i) + /* Out-of-line check before the returning call. */ + return g (i); + /* Out-of-line check before implicit return. */ +} + +/* Out-of-line checks in f3, before returning calls and before return. */ +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ +/* Inline checking in all other functions. */ +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ +/* Check before tail-call in all functions, but f3 is out-of-line. */ +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 2 "hardcfr" } } */ +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before stmt" 1 "hardcfr" } } */