From mboxrd@z Thu Jan 1 00:00:00 1970
From: Richard Biener
Date: Fri, 20 Oct 2023 08:33:53 +0200
Subject: Re: [PATCH v3] Control flow redundancy hardening
To: Alexandre Oliva
Cc: gcc-patches@gcc.gnu.org, Jeff Law, David Edelsohn, Segher Boessenkool, Kewen Lin
Content-Type: text/plain; charset="UTF-8"
On Fri, Oct 20, 2023 at 7:21 AM Alexandre Oliva wrote:
>
> Here's a refreshed and improved version that, compared with v2 posted in
> June, adds initializers for probabilities and execution counts of EH
> edges and blocks, fixes a bug when encountering abnormal gotos, drops an
> inconsistent and undesirable use of BITS_BIG_ENDIAN in the runtime
> bitmaps, adds tests for excess visited bits, fixes a typo in a runtime
> test for __CHAR_BIT__ between 14 and 28, and avoids a runtime error when
> verbose failures are enabled and the block count is a multiple of the
> word size in bits.
> https://gcc.gnu.org/pipermail/gcc-patches/2023-June/623232.html
>
> Regstrapped on x86_64-linux-gnu and ppc64le-linux-gnu.  Also tested on
> various other targets with gcc-12, and bootstrapped with the feature
> enabled for functions with up to 32 blocks.  Ok to install?

OK.

Thanks,
Richard.

> ----
>
> On ppc64le, bootstrap compare fails because of scheduling differences in
> e.g. libiberty/filename_cmp.c, with and without -fPIC.  The latent
> scheduler problem is apparent with -g -O2 -fPIC -fcompare-debug
> -fharden-control-flow-redundancy on gcc135 in the compile farm.  With
> -fdump-noaddr -fdump-unnumbered-links --param
> min-nondebug-insn-uid=10000 -fdump-rtl-sched1 -fsched-verbose=9, the
> following diff between .gk (-g0) and non-.gk causes insn 10041, head of
> bb7, to be scheduled differently.  ISTM the asm stmts that the pass
> introduces to ensure the bitmap operations aren't deferred or combined
> affect subsequent insns differently, especially at BB boundaries.  The
> last nondebug insn in bb 6 is such an asm insn.  Does this ring any
> bells?
>
> -;; | 5 10041 | 18 +3 | GENERAL_REGS:[1 base cost 0] ALTIVEC_REGS:[0 base cost 0] VSX_REGS:[0 base cost 0] CR_REGS:[0 base cost 0] SPECIAL_REGS:[0 base cost 0]
> +;; | 5 10041 | 18 +2 | GENERAL_REGS:[1 base cost 0] ALTIVEC_REGS:[0 base cost 0] VSX_REGS:[0 base cost 0] CR_REGS:[0 base cost 0] SPECIAL_REGS:[0 base cost 0]
>
> ----
>
> This patch introduces an optional hardening pass to catch unexpected
> execution flows.  Functions are transformed so that basic blocks set a
> bit in an automatic array, and (non-exceptional) function exit edges
> check that the bits in the array represent an expected execution path
> in the CFG.
>
> Functions with multiple exit edges, or with too many blocks, call an
> out-of-line checker builtin implemented in libgcc.  For simpler
> functions, the verification is performed in-line.
>
> -fharden-control-flow-redundancy enables the pass for eligible
> functions, --param hardcfr-max-blocks sets a block count limit for
> functions to be eligible, and --param hardcfr-max-inline-blocks
> tunes the "too many blocks" limit for in-line verification.
> -fhardcfr-skip-leaf makes leaf functions non-eligible.
>
> Additional -fhardcfr-check-* options are added to enable checking at
> exception escape points, before potential sibcalls, hereby dubbed
> returning calls, and before noreturn calls and exception raises.
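As a rough illustration of the scheme described above, here is a
hand-written C sketch, not the pass's actual output: the real pass works
on GIMPLE, sizes the bitmap from the block count, and emits either this
kind of inline check or a call to the out-of-line __hardcfr_check
routine in libgcc.

    /* Each basic block sets its own bit in a local bitmap; at the
       single exit, every visited block must have at least one visited
       predecessor and one visited successor, otherwise trap.  */

    static inline int
    visited_p (unsigned v, int i)
    {
      return (v >> i) & 1;
    }

    int
    hardened (int x)
    {
      unsigned visited = 0;
      int ret;

      visited |= 1u << 0;		/* block 0: entry and test */
      if (x)
	{
	  visited |= 1u << 1;		/* block 1: then arm */
	  ret = 1;
	}
      else
	{
	  visited |= 1u << 2;		/* block 2: else arm */
	  ret = 2;
	}
      visited |= 1u << 3;		/* block 3: return block */

      /* CFG edges: 0->1, 0->2, 1->3, 2->3; ENTRY and EXIT count as
	 always visited.  A visited block with no visited predecessor
	 or no visited successor means the recorded path cannot have
	 happened in this CFG.  */
      if (visited_p (visited, 0)
	  && !(visited_p (visited, 1) || visited_p (visited, 2)))
	__builtin_trap ();
      if (visited_p (visited, 1)
	  && !(visited_p (visited, 0) && visited_p (visited, 3)))
	__builtin_trap ();
      if (visited_p (visited, 2)
	  && !(visited_p (visited, 0) && visited_p (visited, 3)))
	__builtin_trap ();
      if (visited_p (visited, 3)
	  && !(visited_p (visited, 1) || visited_p (visited, 2)))
	__builtin_trap ();

      return ret;
    }

A control-flow hijack that reaches, say, the then arm without having
executed the entry block leaves a bit pattern with no valid path through
the CFG, and the exit check traps.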
> A notable case is the distinction between noreturn calls expected to
> throw and those expected to terminate or loop forever: the default
> setting for -fhardcfr-check-noreturn-calls, no-xthrow, performs
> checking before the latter, but the former only gets checking in the
> exception handler.  GCC can only tell them apart through explicit
> marking: noreturn functions expected to raise carry the
> newly-introduced expected_throw attribute, and the corresponding
> ECF_XTHROW flag.
>
>
> for gcc/ChangeLog
>
> 	* tree-core.h (ECF_XTHROW): New macro.
> 	* tree.cc (set_call_expr): Add expected_throw attribute when ECF_XTHROW is set.
> 	(build_common_builtin_node): Add ECF_XTHROW to __cxa_end_cleanup and _Unwind_Resume or _Unwind_SjLj_Resume.
> 	* calls.cc (flags_from_decl_or_type): Check for expected_throw attribute to set ECF_XTHROW.
> 	* gimple.cc (gimple_build_call_from_tree): Propagate ECF_XTHROW from decl flags to gimple call...
> 	(gimple_call_flags): ... and back.
> 	* gimple.h (GF_CALL_XTHROW): New gf_mask flag.
> 	(gimple_call_set_expected_throw): New.
> 	(gimple_call_expected_throw_p): New.
> 	* Makefile.in (OBJS): Add gimple-harden-control-flow.o.
> 	* builtins.def (BUILT_IN___HARDCFR_CHECK): New.
> 	* common.opt (fharden-control-flow-redundancy): New.
> 	(-fhardcfr-check-returning-calls): New.
> 	(-fhardcfr-check-exceptions): New.
> 	(-fhardcfr-check-noreturn-calls=*): New.
> 	(Enum hardcfr_check_noreturn_calls): New.
> 	(fhardcfr-skip-leaf): New.
> 	* doc/invoke.texi: Document them.
> 	(hardcfr-max-blocks, hardcfr-max-inline-blocks): New params.
> 	* flag-types.h (enum hardcfr_noret): New.
> 	* gimple-harden-control-flow.cc: New.
> 	* params.opt (-param=hardcfr-max-blocks=): New.
> 	(-param=hardcfr-max-inline-blocks=): New.
> 	* passes.def (pass_harden_control_flow_redundancy): Add.
> 	* tree-pass.h (make_pass_harden_control_flow_redundancy): Declare.
> 	* doc/extend.texi: Document expected_throw attribute.
>
> for gcc/ada/ChangeLog
>
> 	* gcc-interface/trans.cc (gigi): Mark __gnat_reraise_zcx with ECF_XTHROW.
> 	(build_raise_check): Likewise for all rcheck subprograms.
> 	* gcc-interface/utils.cc (handle_expected_throw_attribute): New.
> 	(gnat_internal_attribute_table): Add expected_throw.
> 	* libgnat/a-except.ads (Raise_Exception): Mark expected_throw.
> 	(Reraise_Occurrence): Likewise.
> 	(Raise_Exception_Always): Likewise.
> 	(Raise_From_Controlled_Operation): Likewise.
> 	(Reraise_Occurrence_Always): Likewise.
> 	(Reraise_Occurrence_No_Defer): Likewise.
> 	* libgnat/a-except.adb
> 	(Exception_Propagation.Propagate_Exception): Likewise.
> 	(Complete_And_Propagate_Occurrence): Likewise.
> 	(Raise_Exception_No_Defer): Likewise.
> 	(Raise_From_Signal_Handler): Likewise.
> 	(Raise_With_Msg): Likewise.
> 	(Raise_With_Location_And_Msg): Likewise.
> 	(Raise_Constraint_Error): Likewise.
> 	(Raise_Constraint_Error_Msg): Likewise.
> 	(Raise_Program_Error): Likewise.
> 	(Raise_Program_Error_Msg): Likewise.
> 	(Raise_Storage_Error): Likewise.
> 	(Raise_Storage_Error_Msg): Likewise.
> 	(Reraise, Rcheck_*): Likewise.
> 	* doc/gnat_rm/security_hardening_features.rst (Control Flow
> 	Redundancy): Add -fhardcfr-check-noreturn-calls=no-xthrow.
> 	Note the influence of expected_throw.  Document
> 	-fhardcfr-skip-leaf.
>
> for gcc/c-family/ChangeLog
>
> 	* c-attribs.cc (handle_expected_throw_attribute): New.
> 	(c_common_attribute_table): Add expected_throw.
>
> for gcc/cp/ChangeLog
>
> 	* decl.cc (push_throw_library_fn): Mark with ECF_XTHROW.
> 	* except.cc (build_throw): Likewise __cxa_throw,
> 	_ITM_cxa_throw, __cxa_rethrow.
>
> for gcc/testsuite/ChangeLog
>
> 	* c-c++-common/torture/harden-cfr.c: New.
> 	* c-c++-common/torture/harden-abrt.c: New.
> 	* c-c++-common/torture/harden-bref.c: New.
> 	* c-c++-common/torture/harden-tail.c: New.
> 	* c-c++-common/harden-cfr-noret-never-O0.c: New.
> 	* c-c++-common/torture/harden-cfr-noret-never.c: New.
> 	* c-c++-common/torture/harden-cfr-noret-noexcept.c: New.
> 	* c-c++-common/torture/harden-cfr-noret-nothrow.c: New.
> 	* c-c++-common/torture/harden-cfr-noret.c: New.
> 	* c-c++-common/torture/harden-cfr-notail.c: New.
> 	* c-c++-common/torture/harden-cfr-returning.c: New.
> 	* c-c++-common/torture/harden-cfr-tail.c: Extend.
> 	* c-c++-common/torture/harden-cfr-abrt-always.c: New.
> 	* c-c++-common/torture/harden-cfr-abrt-never.c: New.
> 	* c-c++-common/torture/harden-cfr-abrt-no-xthrow.c: New.
> 	* c-c++-common/torture/harden-cfr-abrt-nothrow.c: New.
> 	* c-c++-common/torture/harden-cfr-abrt.c: Extend.
> 	* c-c++-common/torture/harden-cfr-always.c: New.
> 	* c-c++-common/torture/harden-cfr-never.c: New.
> 	* c-c++-common/torture/harden-cfr-no-xthrow.c: New.
> 	* c-c++-common/torture/harden-cfr-nothrow.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-always.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-never.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-noopt.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-noret.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-no-xthrow.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-nothrow.c: New.
> 	* c-c++-common/torture/harden-cfr-bret-retcl.c: New.
> 	* c-c++-common/torture/harden-cfr-bret.c (g): New.
> 	* g++.dg/harden-cfr-throw-always-O0.C: New.
> 	* g++.dg/harden-cfr-throw-returning-O0.C: New.
> 	* g++.dg/torture/harden-cfr-noret-always-no-nothrow.C: New.
> 	* g++.dg/torture/harden-cfr-noret-never-no-nothrow.C: New.
> 	* g++.dg/torture/harden-cfr-noret-no-nothrow.C: New.
> 	* g++.dg/torture/harden-cfr-throw-always.C: New.
> 	* g++.dg/torture/harden-cfr-throw-never.C: New.
> 	* g++.dg/torture/harden-cfr-throw-no-xthrow.C: New.
> 	* g++.dg/torture/harden-cfr-throw-no-xthrow-expected.C: New.
> 	* g++.dg/torture/harden-cfr-throw-nothrow.C: New.
> 	* g++.dg/torture/harden-cfr-throw-nocleanup.C: New.
> 	* g++.dg/torture/harden-cfr-throw-returning.C: New.
> 	* g++.dg/torture/harden-cfr-throw.C: New.
> 	* gcc.dg/torture/harden-cfr-noret-no-nothrow.c: New.
> 	* gcc.dg/torture/harden-cfr-tail-ub.c: New.
> 	* gnat.dg/hardcfr.adb: New.
>
> for libgcc/ChangeLog
>
> 	* Makefile.in (LIB2ADD): Add hardcfr.c.
> 	* hardcfr.c: New.
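As a usage sketch for the new attribute and options (my own example, not
taken from the patch; my_raise and the compilation command are made up),
assuming something like gcc -fexceptions -fharden-control-flow-redundancy
with the patch applied:

    /* Hypothetical exception-raising helper: it never returns
       normally, but it is expected to leave by propagating an
       exception rather than by terminating the program, so it gets
       expected_throw in addition to noreturn.  */
    extern void my_raise (const char *what)
      __attribute__ ((noreturn, expected_throw));

    int
    f (int x)
    {
      if (x < 0)
	/* Under the default -fhardcfr-check-noreturn-calls=no-xthrow,
	   no check is emitted before this call; the recorded path is
	   instead verified at the exception escape point, which
	   -fhardcfr-check-exceptions instruments by default.  */
	my_raise ("negative argument");
      return 2 * x;
    }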
> --- > gcc/Makefile.in | 1 > gcc/ada/gcc-interface/trans.cc | 2 > gcc/builtins.def | 3 > gcc/c-family/c-attribs.cc | 22 > gcc/calls.cc | 3 > gcc/common.opt | 35 > gcc/cp/decl.cc | 3 > gcc/cp/except.cc | 8 > gcc/doc/extend.texi | 11 > gcc/doc/invoke.texi | 93 + > gcc/flag-types.h | 10 > gcc/gimple-harden-control-flow.cc | 1488 ++++++++++++++= ++++++ > gcc/gimple.cc | 6 > gcc/gimple.h | 23 > gcc/params.opt | 8 > gcc/passes.def | 1 > .../c-c++-common/harden-cfr-noret-never-O0.c | 12 > .../c-c++-common/torture/harden-cfr-abrt-always.c | 11 > .../c-c++-common/torture/harden-cfr-abrt-never.c | 11 > .../torture/harden-cfr-abrt-no-xthrow.c | 11 > .../c-c++-common/torture/harden-cfr-abrt-nothrow.c | 11 > .../c-c++-common/torture/harden-cfr-abrt.c | 19 > .../c-c++-common/torture/harden-cfr-always.c | 13 > .../c-c++-common/torture/harden-cfr-bret-always.c | 13 > .../c-c++-common/torture/harden-cfr-bret-never.c | 13 > .../torture/harden-cfr-bret-no-xthrow.c | 14 > .../c-c++-common/torture/harden-cfr-bret-noopt.c | 12 > .../c-c++-common/torture/harden-cfr-bret-noret.c | 12 > .../c-c++-common/torture/harden-cfr-bret-nothrow.c | 13 > .../c-c++-common/torture/harden-cfr-bret-retcl.c | 12 > .../c-c++-common/torture/harden-cfr-bret.c | 17 > .../c-c++-common/torture/harden-cfr-never.c | 13 > .../c-c++-common/torture/harden-cfr-no-xthrow.c | 13 > .../c-c++-common/torture/harden-cfr-noret-never.c | 18 > .../torture/harden-cfr-noret-noexcept.c | 16 > .../torture/harden-cfr-noret-nothrow.c | 13 > .../c-c++-common/torture/harden-cfr-noret.c | 38 + > .../c-c++-common/torture/harden-cfr-notail.c | 8 > .../c-c++-common/torture/harden-cfr-nothrow.c | 13 > .../c-c++-common/torture/harden-cfr-returning.c | 35 > .../c-c++-common/torture/harden-cfr-skip-leaf.c | 10 > .../c-c++-common/torture/harden-cfr-tail.c | 52 + > gcc/testsuite/c-c++-common/torture/harden-cfr.c | 84 + > gcc/testsuite/g++.dg/harden-cfr-throw-always-O0.C | 13 > .../g++.dg/harden-cfr-throw-returning-O0.C | 12 > .../g++.dg/harden-cfr-throw-returning-enabled-O0.C | 11 > .../torture/harden-cfr-noret-always-no-nothrow.C | 16 > .../torture/harden-cfr-noret-never-no-nothrow.C | 18 > .../g++.dg/torture/harden-cfr-noret-no-nothrow.C | 23 > .../g++.dg/torture/harden-cfr-throw-always.C | 13 > .../g++.dg/torture/harden-cfr-throw-never.C | 12 > .../torture/harden-cfr-throw-no-xthrow-expected.C | 16 > .../g++.dg/torture/harden-cfr-throw-no-xthrow.C | 12 > .../g++.dg/torture/harden-cfr-throw-nocleanup.C | 11 > .../g++.dg/torture/harden-cfr-throw-nothrow.C | 11 > .../g++.dg/torture/harden-cfr-throw-returning.C | 31 > gcc/testsuite/g++.dg/torture/harden-cfr-throw.C | 73 + > .../gcc.dg/torture/harden-cfr-noret-no-nothrow.c | 15 > gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c | 40 + > gcc/testsuite/gnat.dg/hardcfr.adb | 76 + > gcc/tree-core.h | 3 > gcc/tree-pass.h | 2 > gcc/tree.cc | 9 > libgcc/Makefile.in | 3 > libgcc/hardcfr.c | 300 ++++ > 65 files changed, 2938 insertions(+), 6 deletions(-) > create mode 100644 gcc/gimple-harden-control-flow.cc > create mode 100644 gcc/testsuite/c-c++-common/harden-cfr-noret-never-O0.= c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-al= ways.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-ne= ver.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-no= -xthrow.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-no= throw.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-abrt.c > create mode 100644 
gcc/testsuite/c-c++-common/torture/harden-cfr-always.= c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-al= ways.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-ne= ver.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no= -xthrow.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no= opt.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no= ret.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no= throw.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret-re= tcl.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-bret.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-never.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-no-xthr= ow.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-noret-n= ever.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-noret-n= oexcept.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-noret-n= othrow.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-notail.= c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-nothrow= .c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-returni= ng.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-skip-le= af.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c > create mode 100644 gcc/testsuite/c-c++-common/torture/harden-cfr.c > create mode 100644 gcc/testsuite/g++.dg/harden-cfr-throw-always-O0.C > create mode 100644 gcc/testsuite/g++.dg/harden-cfr-throw-returning-O0.C > create mode 100644 gcc/testsuite/g++.dg/harden-cfr-throw-returning-enabl= ed-O0.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-= no-nothrow.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-n= o-nothrow.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-noth= row.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.= C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-never.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthr= ow-expected.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthr= ow.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-noclean= up.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-nothrow= .C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw-returni= ng.C > create mode 100644 gcc/testsuite/g++.dg/torture/harden-cfr-throw.C > create mode 100644 gcc/testsuite/gcc.dg/torture/harden-cfr-noret-no-noth= row.c > create mode 100644 gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c > create mode 100644 gcc/testsuite/gnat.dg/hardcfr.adb > create mode 100644 libgcc/hardcfr.c > > diff --git a/gcc/Makefile.in b/gcc/Makefile.in > index 747f749538d0e..a25a1e32fbc5f 100644 > --- a/gcc/Makefile.in > +++ b/gcc/Makefile.in > @@ -1461,6 +1461,7 @@ OBJS =3D \ > gimple-iterator.o \ > gimple-fold.o \ > gimple-harden-conditionals.o \ > + gimple-harden-control-flow.o \ > gimple-laddress.o \ > gimple-loop-interchange.o \ > gimple-loop-jam.o \ > diff --git a/gcc/ada/gcc-interface/trans.cc b/gcc/ada/gcc-interface/trans= .cc > index e99fbb4eb5ed8..89f0a07c824e3 100644 > --- a/gcc/ada/gcc-interface/trans.cc > 
+++ b/gcc/ada/gcc-interface/trans.cc > @@ -519,6 +519,7 @@ gigi (Node_Id gnat_root, > ftype, NULL_TREE, > is_default, true, true, true, false, false, NU= LL, > Empty); > + set_call_expr_flags (reraise_zcx_decl, ECF_NORETURN | ECF_XTHROW); > > /* Dummy objects to materialize "others" and "all others" in the excep= tion > tables. These are exported by a-exexpr-gcc.adb, so see this unit f= or > @@ -721,6 +722,7 @@ build_raise_check (int check, enum exception_info_kin= d kind) > =3D create_subprog_decl (get_identifier (Name_Buffer), NULL_TREE, ft= ype, > NULL_TREE, is_default, true, true, true, false= , > false, NULL, Empty); > + set_call_expr_flags (result, ECF_NORETURN | ECF_XTHROW); > > return result; > } > diff --git a/gcc/builtins.def b/gcc/builtins.def > index 5953266acba96..eb6f4ec2034cd 100644 > --- a/gcc/builtins.def > +++ b/gcc/builtins.def > @@ -1179,6 +1179,9 @@ DEF_GCC_BUILTIN (BUILT_IN_FILE, "FILE", BT_FN_CONST= _STRING, ATTR_NOTHROW_LEAF_LI > DEF_GCC_BUILTIN (BUILT_IN_FUNCTION, "FUNCTION", BT_FN_CONST_STRING, ATTR= _NOTHROW_LEAF_LIST) > DEF_GCC_BUILTIN (BUILT_IN_LINE, "LINE", BT_FN_INT, ATTR_NOTHROW_LEAF_LIS= T) > > +/* Control Flow Redundancy hardening out-of-line checker. */ > +DEF_BUILTIN_STUB (BUILT_IN___HARDCFR_CHECK, "__builtin___hardcfr_check") > + > /* Synchronization Primitives. */ > #include "sync-builtins.def" > > diff --git a/gcc/c-family/c-attribs.cc b/gcc/c-family/c-attribs.cc > index dca7548b2c6af..abf44d5426e82 100644 > --- a/gcc/c-family/c-attribs.cc > +++ b/gcc/c-family/c-attribs.cc > @@ -136,6 +136,7 @@ static tree handle_vector_mask_attribute (tree *, tre= e, tree, int, > static tree handle_nonnull_attribute (tree *, tree, tree, int, bool *); > static tree handle_nonstring_attribute (tree *, tree, tree, int, bool *)= ; > static tree handle_nothrow_attribute (tree *, tree, tree, int, bool *); > +static tree handle_expected_throw_attribute (tree *, tree, tree, int, bo= ol *); > static tree handle_cleanup_attribute (tree *, tree, tree, int, bool *); > static tree handle_warn_unused_result_attribute (tree *, tree, tree, int= , > bool *); > @@ -437,6 +438,8 @@ const struct attribute_spec c_common_attribute_table[= ] =3D > handle_nonstring_attribute, NULL }, > { "nothrow", 0, 0, true, false, false, false, > handle_nothrow_attribute, NULL }, > + { "expected_throw", 0, 0, true, false, false, false, > + handle_expected_throw_attribute, NULL }, > { "may_alias", 0, 0, false, true, false, false, NULL, NULL= }, > { "cleanup", 1, 1, true, false, false, false, > handle_cleanup_attribute, NULL }, > @@ -5459,6 +5462,25 @@ handle_nothrow_attribute (tree *node, tree name, t= ree ARG_UNUSED (args), > return NULL_TREE; > } > > +/* Handle a "nothrow" attribute; arguments as in > + struct attribute_spec.handler. */ > + > +static tree > +handle_expected_throw_attribute (tree *node, tree name, tree ARG_UNUSED = (args), > + int ARG_UNUSED (flags), bool *no_add_att= rs) > +{ > + if (TREE_CODE (*node) =3D=3D FUNCTION_DECL) > + /* No flag to set here. */; > + /* ??? TODO: Support types. */ > + else > + { > + warning (OPT_Wattributes, "%qE attribute ignored", name); > + *no_add_attrs =3D true; > + } > + > + return NULL_TREE; > +} > + > /* Handle a "cleanup" attribute; arguments as in > struct attribute_spec.handler. 
*/ > > diff --git a/gcc/calls.cc b/gcc/calls.cc > index e9e69517997e9..9edb5831611ec 100644 > --- a/gcc/calls.cc > +++ b/gcc/calls.cc > @@ -848,6 +848,9 @@ flags_from_decl_or_type (const_tree exp) > flags |=3D ECF_TM_PURE; > } > > + if (lookup_attribute ("expected_throw", DECL_ATTRIBUTES (exp))) > + flags |=3D ECF_XTHROW; > + > flags =3D special_function_p (exp, flags); > } > else if (TYPE_P (exp)) > diff --git a/gcc/common.opt b/gcc/common.opt > index b103b8d28edf8..ce34075561f9f 100644 > --- a/gcc/common.opt > +++ b/gcc/common.opt > @@ -1831,6 +1831,41 @@ fharden-conditional-branches > Common Var(flag_harden_conditional_branches) Optimization > Harden conditional branches by checking reversed conditions. > > +fharden-control-flow-redundancy > +Common Var(flag_harden_control_flow_redundancy) Optimization > +Harden control flow by recording and checking execution paths. > + > +fhardcfr-skip-leaf > +Common Var(flag_harden_control_flow_redundancy_skip_leaf) Optimization > +Disable CFR in leaf functions. > + > +fhardcfr-check-returning-calls > +Common Var(flag_harden_control_flow_redundancy_check_returning_calls) In= it(-1) Optimization > +Check CFR execution paths also before calls followed by returns of their= results. > + > +fhardcfr-check-exceptions > +Common Var(flag_harden_control_flow_redundancy_check_exceptions) Init(-1= ) Optimization > +Check CFR execution paths also when exiting a function through an except= ion. > + > +fhardcfr-check-noreturn-calls=3D > +Common Joined RejectNegative Enum(hardcfr_check_noreturn_calls) Var(flag= _harden_control_flow_redundancy_check_noreturn) Init(HCFRNR_UNSPECIFIED) Op= timization > +-fhardcfr-check-noreturn-calls=3D[always|no-xthrow|nothrow|never] = Check CFR execution paths also before calling noreturn functions. > + > +Enum > +Name(hardcfr_check_noreturn_calls) Type(enum hardcfr_noret) UnknownError= (unknown hardcfr noreturn checking level %qs) > + > +EnumValue > +Enum(hardcfr_check_noreturn_calls) String(never) Value(HCFRNR_NEVER) > + > +EnumValue > +Enum(hardcfr_check_noreturn_calls) String(nothrow) Value(HCFRNR_NOTHROW) > + > +EnumValue > +Enum(hardcfr_check_noreturn_calls) String(no-xthrow) Value(HCFRNR_NO_XTH= ROW) > + > +EnumValue > +Enum(hardcfr_check_noreturn_calls) String(always) Value(HCFRNR_ALWAYS) > + > ; Nonzero means ignore `#ident' directives. 0 means handle them. 
> ; Generate position-independent code for executables if possible > ; On SVR4 targets, it also controls whether or not to emit a > diff --git a/gcc/cp/decl.cc b/gcc/cp/decl.cc > index ce4c89dea7055..16af59de69627 100644 > --- a/gcc/cp/decl.cc > +++ b/gcc/cp/decl.cc > @@ -5281,7 +5281,8 @@ push_cp_library_fn (enum tree_code operator_code, t= ree type, > tree > push_throw_library_fn (tree name, tree type) > { > - tree fn =3D push_library_fn (name, type, NULL_TREE, ECF_NORETURN | ECF= _COLD); > + tree fn =3D push_library_fn (name, type, NULL_TREE, > + ECF_NORETURN | ECF_XTHROW | ECF_COLD); > return fn; > } > > diff --git a/gcc/cp/except.cc b/gcc/cp/except.cc > index 6c0f0815424c1..e32efb30457bf 100644 > --- a/gcc/cp/except.cc > +++ b/gcc/cp/except.cc > @@ -657,12 +657,13 @@ build_throw (location_t loc, tree exp) > tree args[3] =3D {ptr_type_node, ptr_type_node, cleanup_type}; > > throw_fn =3D declare_library_fn_1 ("__cxa_throw", > - ECF_NORETURN | ECF_COLD, > + ECF_NORETURN | ECF_XTHROW | EC= F_COLD, > void_type_node, 3, args); > if (flag_tm && throw_fn !=3D error_mark_node) > { > tree itm_fn =3D declare_library_fn_1 ("_ITM_cxa_throw", > - ECF_NORETURN | ECF_COLD= , > + ECF_NORETURN | ECF_XTHR= OW > + | ECF_COLD, > void_type_node, 3, args= ); > if (itm_fn !=3D error_mark_node) > { > @@ -797,7 +798,8 @@ build_throw (location_t loc, tree exp) > if (!rethrow_fn) > { > rethrow_fn =3D declare_library_fn_1 ("__cxa_rethrow", > - ECF_NORETURN | ECF_COLD, > + ECF_NORETURN | ECF_XTHROW > + | ECF_COLD, > void_type_node, 0, NULL); > if (flag_tm && rethrow_fn !=3D error_mark_node) > apply_tm_attr (rethrow_fn, get_identifier ("transaction_pure"= )); > diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi > index 93f014a1f8abd..bf941e6b93a18 100644 > --- a/gcc/doc/extend.texi > +++ b/gcc/doc/extend.texi > @@ -3055,6 +3055,17 @@ when using these attributes the problem is diagnos= ed > earlier and with exact location of the call even in presence of inline > functions or when not emitting debugging information. > > +@cindex @code{expected_throw} function attribute > +@item expected_throw > +This attribute, attached to a function, tells the compiler the function > +is more likely to raise or propagate an exception than to return, loop > +forever, or terminate the program. > + > +This hint is mostly ignored by the compiler. The only effect is when > +it's applied to @code{noreturn} functions and > +@samp{-fharden-control-flow-redundancy} is enabled, and > +@samp{-fhardcfr-check-noreturn-calls=3Dnot-always} is not overridden. > + > @cindex @code{externally_visible} function attribute > @item externally_visible > This attribute, attached to a global variable or function, nullifies > diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi > index 16c458431236a..aebe9195ef0f2 100644 > --- a/gcc/doc/invoke.texi > +++ b/gcc/doc/invoke.texi > @@ -642,6 +642,9 @@ Objective-C and Objective-C++ Dialects}. 
> -fsanitize-undefined-trap-on-error -fbounds-check > -fcf-protection=3D@r{[}full@r{|}branch@r{|}return@r{|}none@r{|}check@r{]= } > -fharden-compares -fharden-conditional-branches > +-fharden-control-flow-redundancy -fhardcfr-skip-leaf > +-fhardcfr-check-exceptions -fhardcfr-check-returning-calls > +-fhardcfr-check-noreturn-calls=3D@r{[}always@r{|}no-xthrow@r{|}nothrow@r= {|}never@r{]} > -fstack-protector -fstack-protector-all -fstack-protector-strong > -fstack-protector-explicit -fstack-check > -fstack-limit-register=3D@var{reg} -fstack-limit-symbol=3D@var{sym} > @@ -15964,6 +15967,16 @@ A value of zero can be used to lift > the bound. A variable whose value is unknown at compilation time and > defined outside a SCoP is a parameter of the SCoP. > > +@item hardcfr-max-blocks > +Disable @option{-fharden-control-flow-redundancy} for functions with a > +larger number of blocks than the specified value. Zero removes any > +limit. > + > +@item hardcfr-max-inline-blocks > +Force @option{-fharden-control-flow-redundancy} to use out-of-line > +checking for functions with a larger number of basic blocks than the > +specified value. > + > @item loop-block-tile-size > Loop blocking or strip mining transforms, enabled with > @option{-floop-block} or @option{-floop-strip-mine}, strip mine each > @@ -17448,6 +17461,86 @@ condition, and to call @code{__builtin_trap} if = the result is > unexpected. Use with @samp{-fharden-compares} to cover all > conditionals. > > +@opindex fharden-control-flow-redundancy > +@item -fharden-control-flow-redundancy > +Emit extra code to set booleans when entering basic blocks, and to > +verify and trap, at function exits, when the booleans do not form an > +execution path that is compatible with the control flow graph. > + > +Verification takes place before returns, before mandatory tail calls > +(see below) and, optionally, before escaping exceptions with > +@option{-fhardcfr-check-exceptions}, before returning calls with > +@option{-fhardcfr-check-returning-calls}, and before noreturn calls with > +@option{-fhardcfr-check-noreturn-calls}). Tuning options > +@option{--param hardcfr-max-blocks} and @option{--param > +hardcfr-max-inline-blocks} are available. > + > +Tail call optimization takes place too late to affect control flow > +redundancy, but calls annotated as mandatory tail calls by language > +front-ends, and any calls marked early enough as potential tail calls > +would also have verification issued before the call, but these > +possibilities are merely theoretical, as these conditions can only be > +met when using custom compiler plugins. > + > +@opindex fhardcfr-skip-leaf > +@item -fhardcfr-skip-leaf > +Disable @option{-fharden-control-flow-redundancy} in leaf functions. > + > +@opindex fhardcfr-check-exceptions > +@opindex fno-hardcfr-check-exceptions > +@item -fhardcfr-check-exceptions > +When @option{-fharden-control-flow-redundancy} is active, check the > +recorded execution path against the control flow graph at exception > +escape points, as if the function body was wrapped with a cleanup > +handler that performed the check and reraised. This option is enabled > +by default; use @option{-fno-hardcfr-check-exceptions} to disable it. 
> + > +@opindex fhardcfr-check-returning-calls > +@opindex fno-hardcfr-check-returning-calls > +@item -fhardcfr-check-returning-calls > +When @option{-fharden-control-flow-redundancy} is active, check the > +recorded execution path against the control flow graph before any > +function call immediately followed by a return of its result, if any, so > +as to not prevent tail-call optimization, whether or not it is > +ultimately optimized to a tail call. > + > +This option is enabled by default whenever sibling call optimizations > +are enabled (see @option{-foptimize-sibling-calls}), but it can be > +enabled (or disabled, using its negated form) explicitly, regardless of > +the optimizations. > + > +@opindex fhardcfr-check-noreturn-calls > +@item -fhardcfr-check-noreturn-calls=3D@r{[}always@r{|}no-xthrow@r{|}not= hrow@r{|}never@r{]} > +When @option{-fharden-control-flow-redundancy} is active, check the > +recorded execution path against the control flow graph before > +@code{noreturn} calls, either all of them (@option{always}), those that > +aren't expected to return control to the caller through an exception > +(@option{no-xthrow}, the default), those that may not return control to > +the caller through an exception either (@option{nothrow}), or none of > +them (@option{never}). > + > +Checking before a @code{noreturn} function that may return control to > +the caller through an exception may cause checking to be performed more > +than once, if the exception is caught in the caller, whether by a > +handler or a cleanup. When @option{-fhardcfr-check-exceptions} is also > +enabled, the compiler will avoid associating a @code{noreturn} call with > +the implicitly-added cleanup handler, since it would be redundant with > +the check performed before the call, but other handlers or cleanups in > +the function, if activated, will modify the recorded execution path and > +check it again when another checkpoint is hit. The checkpoint may even > +be another @code{noreturn} call, so checking may end up performed > +multiple times. > + > +Various optimizers may cause calls to be marked as @code{noreturn} > +and/or @code{nothrow}, even in the absence of the corresponding > +attributes, which may affect the placement of checks before calls, as > +well as the addition of implicit cleanup handlers for them. This > +unpredictability, and the fact that raising and reraising exceptions > +frequently amounts to implicitly calling @code{noreturn} functions, have > +made @option{no-xthrow} the default setting for this option: it excludes > +from the @code{noreturn} treatment only internal functions used to > +(re)raise exceptions, that are not affected by these optimizations. > + > @opindex fstack-protector > @item -fstack-protector > Emit extra code to check for buffer overflows, such as stack smashing > diff --git a/gcc/flag-types.h b/gcc/flag-types.h > index 7466c1106f2ba..c1852cd810cc6 100644 > --- a/gcc/flag-types.h > +++ b/gcc/flag-types.h > @@ -157,6 +157,16 @@ enum stack_reuse_level > SR_ALL > }; > > +/* Control Flow Redundancy hardening options for noreturn calls. */ > +enum hardcfr_noret > +{ > + HCFRNR_NEVER, > + HCFRNR_NOTHROW, > + HCFRNR_NO_XTHROW, > + HCFRNR_UNSPECIFIED, > + HCFRNR_ALWAYS, > +}; > + > /* The live patching level. 
*/ > enum live_patching_level > { > diff --git a/gcc/gimple-harden-control-flow.cc b/gcc/gimple-harden-contro= l-flow.cc > new file mode 100644 > index 0000000000000..5c28fd07f3329 > --- /dev/null > +++ b/gcc/gimple-harden-control-flow.cc > @@ -0,0 +1,1488 @@ > +/* Control flow redundancy hardening. > + Copyright (C) 2022 Free Software Foundation, Inc. > + Contributed by Alexandre Oliva . > + > +This file is part of GCC. > + > +GCC is free software; you can redistribute it and/or modify it under > +the terms of the GNU General Public License as published by the Free > +Software Foundation; either version 3, or (at your option) any later > +version. > + > +GCC is distributed in the hope that it will be useful, but WITHOUT ANY > +WARRANTY; without even the implied warranty of MERCHANTABILITY or > +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License > +for more details. > + > +You should have received a copy of the GNU General Public License > +along with GCC; see the file COPYING3. If not see > +. */ > + > +#include "config.h" > +#define INCLUDE_ALGORITHM /* find */ > +#include "system.h" > +#include "coretypes.h" > +#include "backend.h" > +#include "tree.h" > +#include "fold-const.h" > +#include "gimple.h" > +#include "gimplify.h" > +#include "tree-pass.h" > +#include "ssa.h" > +#include "gimple-iterator.h" > +#include "gimple-pretty-print.h" > +#include "tree-cfg.h" > +#include "tree-cfgcleanup.h" > +#include "tree-eh.h" > +#include "except.h" > +#include "sbitmap.h" > +#include "basic-block.h" > +#include "cfghooks.h" > +#include "cfgloop.h" > +#include "cgraph.h" > +#include "alias.h" > +#include "varasm.h" > +#include "output.h" > +#include "langhooks.h" > +#include "diagnostic.h" > +#include "intl.h" > + > +namespace { > + > +/* This pass introduces verification, at function exits, that booleans > + set in each basic block during function execution reflect the > + control flow graph: for each visited block, check that at least one > + predecessor and at least one successor were also visited. This > + sort of hardening may detect various kinds of attacks. */ > + > +/* Define a pass to harden code through control flow redundancy. */ > + > +const pass_data pass_data_harden_control_flow_redundancy =3D { > + GIMPLE_PASS, > + "hardcfr", > + OPTGROUP_NONE, > + TV_NONE, > + PROP_cfg | PROP_ssa, // properties_required > + 0, // properties_provided > + 0, // properties_destroyed > + TODO_cleanup_cfg, // properties_start > + 0, // properties_finish > +}; > + > +class pass_harden_control_flow_redundancy : public gimple_opt_pass > +{ > +public: > + pass_harden_control_flow_redundancy (gcc::context *ctxt) > + : gimple_opt_pass (pass_data_harden_control_flow_redundancy, ctxt) > + {} > + opt_pass *clone () { return new pass_harden_control_flow_redundancy (m= _ctxt); } > + virtual bool gate (function *fun) { > + /* Return quickly if the pass is disabled, without checking any of > + the conditions that might give rise to warnings that would only > + be appropriate if hardening was requested. */ > + if (!flag_harden_control_flow_redundancy) > + return false; > + > + /* Functions that return more than once, like setjmp and vfork > + (that also gets this flag set), will start recording a path > + after the first return, and then may take another path when > + they return again. The unterminated path may then be flagged > + as an error. ??? We could save the visited array before the > + call and restore it if it returns again. 
*/ > + if (fun->calls_setjmp) > + { > + warning_at (DECL_SOURCE_LOCATION (fun->decl), 0, > + "%qD calls % or similar," > + " %<-fharden-control-flow-redundancy%> is not support= ed", > + fun->decl); > + return false; > + } > + > + /* Some targets bypass the abnormal dispatcher block in nonlocal > + gotos, and then we'd miss its visited bit. It might be doable > + to make it work uniformly, but this feature is not used often > + enough to make it worthwhile. */ > + if (fun->has_nonlocal_label) > + { > + warning_at (DECL_SOURCE_LOCATION (fun->decl), 0, > + "%qD receives nonlocal gotos," > + " %<-fharden-control-flow-redundancy%> is not support= ed", > + fun->decl); > + return false; > + } > + > + if (fun->cfg && param_hardcfr_max_blocks > 0 > + && (n_basic_blocks_for_fn (fun) - NUM_FIXED_BLOCKS > + > param_hardcfr_max_blocks)) > + { > + warning_at (DECL_SOURCE_LOCATION (fun->decl), 0, > + "%qD has more than %u blocks, the requested" > + " maximum for %<-fharden-control-flow-redundancy%>", > + fun->decl, param_hardcfr_max_blocks); > + return false; > + } > + > + return true; > + } > + virtual unsigned int execute (function *); > +}; > + > +} > + > +/* Return TRUE iff CFR checks should be inserted before returning > + calls. */ > + > +static bool > +check_returning_calls_p () > +{ > + return > + flag_harden_control_flow_redundancy_check_returning_calls > 0 > + || (flag_harden_control_flow_redundancy_check_returning_calls < 0 > + /* Gates pass_tail_calls. */ > + && flag_optimize_sibling_calls > + /* Gates pass_all_optimizations. */ > + && optimize >=3D 1 && !optimize_debug); > +} > + > +/* Scan BB from the end, updating *RETPTR if given as return stmts and > + copies are found. Return a call or a stmt that cannot appear after > + a tail call, or NULL if the top of the block is reached without > + finding any. */ > + > +static gimple * > +hardcfr_scan_block (basic_block bb, tree **retptr) > +{ > + gimple_stmt_iterator gsi; > + for (gsi =3D gsi_last_bb (bb); !gsi_end_p (gsi); gsi_prev (&gsi)) > + { > + gimple *stmt =3D gsi_stmt (gsi); > + > + /* Ignore labels, returns, nops, clobbers and debug stmts. */ > + if (gimple_code (stmt) =3D=3D GIMPLE_LABEL > + || gimple_code (stmt) =3D=3D GIMPLE_NOP > + || gimple_code (stmt) =3D=3D GIMPLE_PREDICT > + || gimple_clobber_p (stmt) > + || is_gimple_debug (stmt)) > + continue; > + > + if (gimple_code (stmt) =3D=3D GIMPLE_RETURN) > + { > + greturn *gret =3D as_a (stmt); > + if (retptr) > + { > + gcc_checking_assert (!*retptr); > + *retptr =3D gimple_return_retval_ptr (gret); > + } > + continue; > + } > + > + /* Check for a call. */ > + if (is_gimple_call (stmt)) > + return stmt; > + > + /* Allow simple copies to the return value, updating the return > + value to be found in earlier assignments. */ > + if (retptr && *retptr && gimple_assign_single_p (stmt) > + && **retptr =3D=3D gimple_assign_lhs (stmt)) > + { > + *retptr =3D gimple_assign_rhs1_ptr (stmt); > + continue; > + } > + > + return stmt; > + } > + > + /* Any other kind of stmt will prevent a tail call. */ > + return NULL; > +} > + > +/* Return TRUE iff CALL is to be preceded by a CFR checkpoint, i.e., > + if it's a returning call (one whose result is ultimately returned > + without intervening non-copy statements) and we're checking > + returning calls, a __builtin_return call (noreturn with a path to > + the exit block), a must-tail call, or a tail call. 
*/ > + > +static bool > +returning_call_p (gcall *call) > +{ > + if (!(gimple_call_noreturn_p (call) > + || gimple_call_must_tail_p (call) > + || gimple_call_tail_p (call) > + || check_returning_calls_p ())) > + return false; > + > + /* Quickly check that there's a path to exit compatible with a > + returning call. Detect infinite loops by limiting the path > + length to the basic block count, and by looking for duplicate > + blocks before allocating more memory for the path, for amortized > + O(n). */ > + auto_vec path; > + for (basic_block bb =3D gimple_bb (call); > + bb !=3D EXIT_BLOCK_PTR_FOR_FN (cfun); > + bb =3D single_succ (bb)) > + if (!single_succ_p (bb) > + || (single_succ_edge (bb)->flags & EDGE_EH) !=3D 0 > + || n_basic_blocks_for_fn (cfun) - path.length () <=3D NUM_FIXED_B= LOCKS > + || (path.length () =3D=3D path.allocated () > + && std::find (path.begin (), path.end (), bb) !=3D path.end (= ))) > + return false; > + else > + path.safe_push (bb); > + > + /* Check the stmts in the blocks and trace the return value. */ > + tree *retptr =3D NULL; > + for (;;) > + { > + gcc_checking_assert (!path.is_empty ()); > + basic_block bb =3D path.pop (); > + gimple *stop =3D hardcfr_scan_block (bb, &retptr); > + if (stop) > + { > + if (stop !=3D call) > + return false; > + gcc_checking_assert (path.is_empty ()); > + break; > + } > + > + gphi *retphi =3D NULL; > + if (retptr && *retptr && TREE_CODE (*retptr) =3D=3D SSA_NAME > + && !SSA_NAME_IS_DEFAULT_DEF (*retptr) > + && SSA_NAME_DEF_STMT (*retptr) > + && is_a (SSA_NAME_DEF_STMT (*retptr)) > + && gimple_bb (SSA_NAME_DEF_STMT (*retptr)) =3D=3D bb) > + { > + retphi =3D as_a (SSA_NAME_DEF_STMT (*retptr)); > + gcc_checking_assert (gimple_phi_result (retphi) =3D=3D *retptr)= ; > + } > + else > + continue; > + > + gcc_checking_assert (!path.is_empty ()); > + edge e =3D single_succ_edge (path.last ()); > + int i =3D EDGE_COUNT (bb->preds); > + while (i--) > + if (EDGE_PRED (bb, i) =3D=3D e) > + break; > + gcc_checking_assert (i >=3D 0); > + retptr =3D gimple_phi_arg_def_ptr (retphi, i); > + } > + > + return (gimple_call_noreturn_p (call) > + || gimple_call_must_tail_p (call) > + || gimple_call_tail_p (call) > + || (gimple_call_lhs (call) =3D=3D (retptr ? *retptr : NULL) > + && check_returning_calls_p ())); > +} > + > +typedef auto_vec chk_edges_t; > + > +/* Declare for mutual recursion. */ > +static bool hardcfr_sibcall_search_preds (basic_block bb, > + chk_edges_t &chk_edges, > + int &count_chkcall, > + auto_sbitmap &chkcall_blocks, > + int &count_postchk, > + auto_sbitmap &postchk_blocks, > + tree *retptr); > + > +/* Search backwards from the end of BB for a mandatory or potential > + sibcall. Schedule the block to be handled sort-of like noreturn if > + so. Recurse to preds, with updated RETPTR, if the block only > + contains stmts that may follow such a call, scheduling checking at > + edges and marking blocks as post-check as needed. Return true iff, > + at the end of the block, a check will have already been > + performed. */ > + > +static bool > +hardcfr_sibcall_search_block (basic_block bb, > + chk_edges_t &chk_edges, > + int &count_chkcall, > + auto_sbitmap &chkcall_blocks, > + int &count_postchk, > + auto_sbitmap &postchk_blocks, > + tree *retptr) > +{ > + /* Conditionals and internal exceptions rule out tail calls. 
*/ > + if (!single_succ_p (bb) > + || (single_succ_edge (bb)->flags & EDGE_EH) !=3D 0) > + return false; > + > + gimple *stmt =3D hardcfr_scan_block (bb, &retptr); > + if (!stmt) > + return hardcfr_sibcall_search_preds (bb, chk_edges, > + count_chkcall, chkcall_blocks, > + count_postchk, postchk_blocks, > + retptr); > + > + if (!is_a (stmt)) > + return false; > + > + /* Avoid disrupting mandatory or early-marked tail calls, > + inserting the check before them. This works for > + must-tail calls, but tail calling as an optimization is > + detected too late for us. > + > + Also check for noreturn calls here. Noreturn calls won't > + normally have edges to exit, so they won't be found here, > + but __builtin_return does, and we must check before > + it, so handle it like a tail call. */ > + gcall *call =3D as_a (stmt); > + if (!(gimple_call_noreturn_p (call) > + || gimple_call_must_tail_p (call) > + || gimple_call_tail_p (call) > + || (gimple_call_lhs (call) =3D=3D (retptr ? *retptr : NULL) > + && check_returning_calls_p ()))) > + return false; > + > + gcc_checking_assert (returning_call_p (call)); > + > + /* We found a call that is to be preceded by checking. */ > + if (bitmap_set_bit (chkcall_blocks, bb->index)) > + ++count_chkcall; > + else > + gcc_unreachable (); > + return true; > +} > + > + > +/* Search preds of BB for a mandatory or potential sibcall or > + returning call, and arrange for the blocks containing them to have > + a check inserted before the call, like noreturn calls. If any > + preds are found to perform checking, schedule checks at the edges > + of those that don't, and mark BB as postcheck.. */ > + > +static bool > +hardcfr_sibcall_search_preds (basic_block bb, > + chk_edges_t &chk_edges, > + int &count_chkcall, > + auto_sbitmap &chkcall_blocks, > + int &count_postchk, > + auto_sbitmap &postchk_blocks, > + tree *retptr) > +{ > + /* For the exit block, we wish to force a check at every > + predecessor, so pretend we've already found a pred that had > + checking, so that we schedule checking at every one of its pred > + edges. */ > + bool first =3D bb->index >=3D NUM_FIXED_BLOCKS; > + bool postchecked =3D true; > + > + gphi *retphi =3D NULL; > + if (retptr && *retptr && TREE_CODE (*retptr) =3D=3D SSA_NAME > + && !SSA_NAME_IS_DEFAULT_DEF (*retptr) > + && SSA_NAME_DEF_STMT (*retptr) > + && is_a (SSA_NAME_DEF_STMT (*retptr)) > + && gimple_bb (SSA_NAME_DEF_STMT (*retptr)) =3D=3D bb) > + { > + retphi =3D as_a (SSA_NAME_DEF_STMT (*retptr)); > + gcc_checking_assert (gimple_phi_result (retphi) =3D=3D *retptr); > + } > + > + for (int i =3D EDGE_COUNT (bb->preds); i--; first =3D false) > + { > + edge e =3D EDGE_PRED (bb, i); > + > + bool checked > + =3D hardcfr_sibcall_search_block (e->src, chk_edges, > + count_chkcall, chkcall_blocks, > + count_postchk, postchk_blocks, > + !retphi ? retptr > + : gimple_phi_arg_def_ptr (retphi,= i)); > + > + if (first) > + { > + postchecked =3D checked; > + continue; > + } > + > + /* When we first find a checked block, force a check at every > + other incoming edge we've already visited, and those we > + visit afterwards that don't have their own check, so that > + when we reach BB, the check has already been performed. 
*/ > + if (!postchecked && checked) > + { > + for (int j =3D EDGE_COUNT (bb->preds); --j > i; ) > + chk_edges.safe_push (EDGE_PRED (bb, j)); > + postchecked =3D true; > + } > + if (postchecked && !checked) > + chk_edges.safe_push (EDGE_PRED (bb, i)); > + } > + > + if (postchecked && bb->index >=3D NUM_FIXED_BLOCKS) > + { > + if (bitmap_set_bit (postchk_blocks, bb->index)) > + count_postchk++; > + else > + gcc_unreachable (); > + } > + > + return postchecked; > +} > + > + > +class rt_bb_visited > +{ > + /* Use a sufficiently wide unsigned type to hold basic block numbers. = */ > + typedef size_t blknum; > + > + /* Record the original block count of the function. */ > + blknum nblocks; > + /* Record the number of bits per VWORD (short for VISITED WORD), an > + efficient mode to set and test bits for blocks we visited, and to > + encode the CFG in case out-of-line verification is used. */ > + unsigned vword_bits; > + > + /* Hold the unsigned integral VWORD type. */ > + tree vword_type; > + /* Hold a pointer-to-VWORD type. */ > + tree vword_ptr; > + > + /* Hold a growing sequence used to check, inline or out-of-line, > + that VISITED encodes an expected execution path. */ > + gimple_seq ckseq; > + /* If nonNULL, hold a growing representation of the CFG for > + out-of-line testing. */ > + tree rtcfg; > + > + /* Hold the declaration of an array of VWORDs, used as an array of > + NBLOCKS-2 bits. */ > + tree visited; > + > + /* If performing inline checking, hold a declarations of boolean > + variables used for inline checking. CKBLK holds the result of > + testing whether the VISITED bit corresponding to a predecessor or > + successor is set, CKINV inverts that bit, CKPART gets cleared if > + a block was not visited or if CKINV for any of its predecessors > + or successors is set, and CKFAIL gets set if CKPART remains set > + at the end of a block's predecessors or successors list. */ > + tree ckfail, ckpart, ckinv, ckblk; > + > + /* Convert a block index N to a block vindex, the index used to > + identify it in the VISITED array. Check that it's in range: > + neither ENTRY nor EXIT, but maybe one-past-the-end, to compute > + the visited array length. */ > + blknum num2idx (blknum n) { > + gcc_checking_assert (n >=3D NUM_FIXED_BLOCKS && n <=3D nblocks); > + return (n - NUM_FIXED_BLOCKS); > + } > + /* Return the block vindex for BB, that must not be ENTRY or > + EXIT. */ > + blknum bb2idx (basic_block bb) { > + gcc_checking_assert (bb !=3D ENTRY_BLOCK_PTR_FOR_FN (cfun) > + && bb !=3D EXIT_BLOCK_PTR_FOR_FN (cfun)); > + gcc_checking_assert (blknum (bb->index) < nblocks); > + return num2idx (bb->index); > + } > + > + /* Compute the type to be used for the VISITED array. */ > + tree vtype () > + { > + blknum n =3D num2idx (nblocks); > + return build_array_type_nelts (vword_type, > + (n + vword_bits - 1) / vword_bits); > + } > + > + /* Compute and return the index into VISITED for block BB. If BITP > + is non-NULL, also compute and store the bit mask corresponding to > + block BB in *BITP, so that (visited[index] & mask) tells whether > + BB was visited. */ > + tree vwordidx (basic_block bb, tree *bitp =3D NULL) > + { > + blknum idx =3D bb2idx (bb); > + if (bitp) > + { > + unsigned bit =3D idx % vword_bits; > + /* We don't need to adjust shifts to follow native bit > + endianness here, all of our uses of the CFG and visited > + bitmaps, whether at compile or runtime, are shifted bits on > + full words. 
This adjustment here would require a > + corresponding adjustment at runtime, which would be nothing > + but undesirable overhead for us. */ > + if (0 /* && BITS_BIG_ENDIAN */) > + bit =3D vword_bits - bit - 1; > + wide_int wbit =3D wi::set_bit_in_zero (bit, vword_bits); > + *bitp =3D wide_int_to_tree (vword_type, wbit); > + } > + return build_int_cst (vword_ptr, idx / vword_bits); > + } > + > + /* Return an expr to accesses the visited element that holds > + information about BB. If BITP is non-NULL, set it to the mask to > + tell which bit in that expr refers to BB. */ > + tree vword (basic_block bb, tree *bitp =3D NULL) > + { > + return build2 (MEM_REF, vword_type, > + build1 (ADDR_EXPR, vword_ptr, visited), > + int_const_binop (MULT_EXPR, vwordidx (bb, bitp), > + fold_convert (vword_ptr, > + TYPE_SIZE_UNIT > + (vword_type)))); > + } > + > + /* Return an expr that evaluates to true iff BB was marked as > + VISITED. Add any gimple stmts to SEQP. */ > + tree vindex (basic_block bb, gimple_seq *seqp) > + { > + if (bb =3D=3D ENTRY_BLOCK_PTR_FOR_FN (cfun) > + || bb =3D=3D EXIT_BLOCK_PTR_FOR_FN (cfun)) > + return boolean_true_node; > + > + tree bit, setme =3D vword (bb, &bit); > + tree temp =3D create_tmp_var (vword_type, ".cfrtemp"); > + > + gassign *vload =3D gimple_build_assign (temp, setme); > + gimple_seq_add_stmt (seqp, vload); > + > + gassign *vmask =3D gimple_build_assign (temp, BIT_AND_EXPR, temp, bi= t); > + gimple_seq_add_stmt (seqp, vmask); > + > + return build2 (NE_EXPR, boolean_type_node, > + temp, build_int_cst (vword_type, 0)); > + } > + > + /* Set the bit corresponding to BB in VISITED. Add to SEQ any > + required gimple stmts, and return SEQ, possibly modified. */ > + gimple_seq vset (basic_block bb, gimple_seq seq =3D NULL) > + { > + tree bit, setme =3D vword (bb, &bit); > + tree temp =3D create_tmp_var (vword_type, ".cfrtemp"); > + > + gassign *vload =3D gimple_build_assign (temp, setme); > + gimple_seq_add_stmt (&seq, vload); > + > + gassign *vbitset =3D gimple_build_assign (temp, BIT_IOR_EXPR, temp, = bit); > + gimple_seq_add_stmt (&seq, vbitset); > + > + gassign *vstore =3D gimple_build_assign (unshare_expr (setme), temp)= ; > + gimple_seq_add_stmt (&seq, vstore); > + > + /* Prevent stores into visited from being deferred, forcing > + subsequent bitsets to reload the word rather than reusing > + values already in register. The purpose is threefold: make the > + bitset get to memory in this block, so that control flow > + attacks in functions called in this block don't easily bypass > + the bitset; prevent the bitset word from being retained in a > + register across blocks, which could, in an attack scenario, > + make a later block set more than one bit; and prevent hoisting > + or sinking loads or stores of bitset words out of loops or even > + throughout functions, which could significantly weaken the > + verification. This is equivalent to making the bitsetting > + volatile within the function body, but without changing its > + type; making the bitset volatile would make inline checking far > + less optimizable for no reason. 
*/ > + vec *inputs =3D NULL; > + vec *outputs =3D NULL; > + vec_safe_push (outputs, > + build_tree_list > + (build_tree_list > + (NULL_TREE, build_string (2, "=3Dm")), > + visited)); > + vec_safe_push (inputs, > + build_tree_list > + (build_tree_list > + (NULL_TREE, build_string (1, "m")), > + visited)); > + gasm *stabilize =3D gimple_build_asm_vec ("", inputs, outputs, > + NULL, NULL); > + gimple_seq_add_stmt (&seq, stabilize); > + > + return seq; > + } > + > +public: > + /* Prepare to add control flow redundancy testing to CFUN. */ > + rt_bb_visited (int checkpoints) > + : nblocks (n_basic_blocks_for_fn (cfun)), > + vword_type (NULL), ckseq (NULL), rtcfg (NULL) > + { > + /* If we've already added a declaration for the builtin checker, > + extract vword_type and vword_bits from its declaration. */ > + if (tree checkfn =3D builtin_decl_explicit (BUILT_IN___HARDCFR_CHECK= )) > + { > + tree check_arg_list =3D TYPE_ARG_TYPES (TREE_TYPE (checkfn)); > + tree vword_const_ptr_type =3D TREE_VALUE (TREE_CHAIN (check_arg_l= ist)); > + vword_type =3D TYPE_MAIN_VARIANT (TREE_TYPE (vword_const_ptr_type= )); > + vword_bits =3D tree_to_shwi (TYPE_SIZE (vword_type)); > + } > + /* Otherwise, select vword_bits, vword_type et al, and use it to > + declare the builtin checker. */ > + else > + { > + /* This setting needs to be kept in sync with libgcc/hardcfr.c. > + We aim for at least 28 bits, which enables us to refer to as > + many as 28 << 28 blocks in a function's CFG. That's way over > + 4G blocks. */ > + machine_mode VWORDmode; > + if (BITS_PER_UNIT >=3D 28) > + { > + VWORDmode =3D QImode; > + vword_bits =3D BITS_PER_UNIT; > + } > + else if (BITS_PER_UNIT >=3D 14) > + { > + VWORDmode =3D HImode; > + vword_bits =3D 2 * BITS_PER_UNIT; > + } > + else > + { > + VWORDmode =3D SImode; > + vword_bits =3D 4 * BITS_PER_UNIT; > + } > + > + vword_type =3D lang_hooks.types.type_for_mode (VWORDmode, 1); > + gcc_checking_assert (vword_bits =3D=3D tree_to_shwi (TYPE_SIZE > + (vword_type))); > + > + vword_type =3D build_variant_type_copy (vword_type); > + TYPE_ALIAS_SET (vword_type) =3D new_alias_set (); > + > + tree vword_const =3D build_qualified_type (vword_type, TYPE_QUAL_= CONST); > + tree vword_const_ptr =3D build_pointer_type (vword_const); > + tree type =3D build_function_type_list (void_type_node, sizetype, > + vword_const_ptr, vword_cons= t_ptr, > + NULL_TREE); > + tree decl =3D add_builtin_function_ext_scope > + ("__builtin___hardcfr_check", > + type, BUILT_IN___HARDCFR_CHECK, BUILT_IN_NORMAL, > + "__hardcfr_check", NULL_TREE); > + TREE_NOTHROW (decl) =3D true; > + set_builtin_decl (BUILT_IN___HARDCFR_CHECK, decl, true); > + } > + > + /* The checker uses a qualified pointer, so we can't reuse it, > + so build a new one. */ > + vword_ptr =3D build_pointer_type (vword_type); > + > + tree visited_type =3D vtype (); > + visited =3D create_tmp_var (visited_type, ".cfrvisited"); > + > + if (nblocks - NUM_FIXED_BLOCKS > blknum (param_hardcfr_max_inline_bl= ocks) > + || checkpoints > 1) > + { > + /* Make sure vword_bits is wide enough for the representation > + of nblocks in rtcfg. Compare with vword_bits << vword_bits, > + but avoiding overflows, shifting nblocks right instead. If > + vword_bits is wider than HOST_WIDE_INT, assume it fits, so > + as to avoid undefined shifts. */ > + gcc_assert (HOST_BITS_PER_WIDE_INT <=3D vword_bits > + || (((unsigned HOST_WIDE_INT)(num2idx (nblocks)) > + >> vword_bits) < vword_bits)); > + > + /* Build a terminator for the constructor list. 
*/ > + rtcfg =3D build_tree_list (NULL_TREE, NULL_TREE); > + return; > + } > + > + ckfail =3D create_tmp_var (boolean_type_node, ".cfrfail"); > + ckpart =3D create_tmp_var (boolean_type_node, ".cfrpart"); > + ckinv =3D create_tmp_var (boolean_type_node, ".cfrinv"); > + ckblk =3D create_tmp_var (boolean_type_node, ".cfrblk"); > + > + gassign *ckfail_init =3D gimple_build_assign (ckfail, boolean_false_= node); > + gimple_seq_add_stmt (&ckseq, ckfail_init); > + } > + > + /* Insert SEQ before a resx or a call in INSBB. */ > + void insert_exit_check_in_block (gimple_seq seq, basic_block insbb) > + { > + gimple_stmt_iterator gsi =3D gsi_last_bb (insbb); > + > + while (!gsi_end_p (gsi)) > + if (is_a (gsi_stmt (gsi)) > + || is_a (gsi_stmt (gsi))) > + break; > + else > + gsi_prev (&gsi); > + > + gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT); > + } > + > + /* Insert SEQ on E. */ > + void insert_exit_check_on_edge (gimple_seq seq, edge e) > + { > + gsi_insert_seq_on_edge_immediate (e, seq); > + } > + > + /* Add checking code to CHK_EDGES and CHKCALL_BLOCKS, and > + initialization code on the entry edge. Before this point, the > + CFG has been undisturbed, and all the needed data has been > + collected and safely stowed. */ > + void check (chk_edges_t &chk_edges, > + int count_chkcall, auto_sbitmap const &chkcall_blocks) > + { > + /* If we're using out-of-line checking, create and statically > + initialize the CFG checking representation, generate the > + checker call for the checking sequence, and insert it in all > + exit edges, if there's more than one. If there's only one, we > + use the same logic as the inline case to insert the check > + sequence. */ > + if (rtcfg) > + { > + /* Unreverse the list, and drop the tail node turned into head. = */ > + rtcfg =3D TREE_CHAIN (nreverse (rtcfg)); > + > + /* Turn the indices stored in TREE_PURPOSE into separate > + nodes. It was useful to keep them together to enable > + combination of masks and for clear separation of > + terminators while constructing it, but now we have to turn > + it into a sequence of words. */ > + for (tree node =3D rtcfg; node; node =3D TREE_CHAIN (node)) > + { > + tree wordidx =3D TREE_PURPOSE (node); > + if (!wordidx) > + continue; > + > + TREE_PURPOSE (node) =3D NULL_TREE; > + TREE_CHAIN (node) =3D tree_cons (NULL_TREE, > + fold_convert (vword_type, word= idx), > + TREE_CHAIN (node)); > + } > + > + /* Build the static initializer for the array with the CFG > + representation for out-of-line checking. */ > + tree init =3D build_constructor_from_list (NULL_TREE, rtcfg); > + TREE_TYPE (init) =3D build_array_type_nelts (vword_type, > + CONSTRUCTOR_NELTS (ini= t)); > + char buf[32]; > + ASM_GENERATE_INTERNAL_LABEL (buf, "Lhardcfg", > + current_function_funcdef_no); > + rtcfg =3D build_decl (UNKNOWN_LOCATION, VAR_DECL, > + get_identifier (buf), > + TREE_TYPE (init)); > + TREE_READONLY (rtcfg) =3D 1; > + TREE_STATIC (rtcfg) =3D 1; > + TREE_ADDRESSABLE (rtcfg) =3D 1; > + TREE_USED (rtcfg) =3D 1; > + DECL_ARTIFICIAL (rtcfg) =3D 1; > + DECL_IGNORED_P (rtcfg) =3D 1; > + DECL_INITIAL (rtcfg) =3D init; > + make_decl_rtl (rtcfg); > + varpool_node::finalize_decl (rtcfg); > + > + /* Add the checker call to ckseq. 
*/ > + gcall *call_chk =3D gimple_build_call (builtin_decl_explicit > + (BUILT_IN___HARDCFR_CHECK), = 3, > + build_int_cst (sizetype, > + num2idx (nblo= cks)), > + build1 (ADDR_EXPR, vword_ptr= , > + visited), > + build1 (ADDR_EXPR, vword_ptr= , > + rtcfg)); > + gimple_seq_add_stmt (&ckseq, call_chk); > + > + gimple *clobber =3D gimple_build_assign (visited, > + build_clobber > + (TREE_TYPE (visited))); > + gimple_seq_add_stmt (&ckseq, clobber); > + > + /* If we have multiple exit edges, insert (copies of) > + ckseq in all of them. */ > + for (int i =3D chk_edges.length (); i--; ) > + { > + gimple_seq seq =3D ckseq; > + /* Copy the sequence, unless we're dealing with the > + last edge (we're counting down to zero). */ > + if (i || count_chkcall) > + seq =3D gimple_seq_copy (seq); > + > + edge e =3D chk_edges[i]; > + > + if (dump_file) > + { > + if (e->dest =3D=3D EXIT_BLOCK_PTR_FOR_FN (cfun)) > + fprintf (dump_file, > + "Inserting out-of-line check in" > + " block %i's edge to exit.\n", > + e->src->index); > + else > + fprintf (dump_file, > + "Inserting out-of-line check in" > + " block %i's edge to postcheck block %i.\n", > + e->src->index, e->dest->index); > + } > + > + insert_exit_check_on_edge (seq, e); > + > + gcc_checking_assert (!bitmap_bit_p (chkcall_blocks, e->src->i= ndex)); > + } > + > + sbitmap_iterator it; > + unsigned i; > + EXECUTE_IF_SET_IN_BITMAP (chkcall_blocks, 0, i, it) > + { > + basic_block bb =3D BASIC_BLOCK_FOR_FN (cfun, i); > + > + gimple_seq seq =3D ckseq; > + gcc_checking_assert (count_chkcall > 0); > + if (--count_chkcall) > + seq =3D gimple_seq_copy (seq); > + > + if (dump_file) > + fprintf (dump_file, > + "Inserting out-of-line check before stmt in block = %i.\n", > + bb->index); > + > + insert_exit_check_in_block (seq, bb); > + } > + > + gcc_checking_assert (count_chkcall =3D=3D 0); > + } > + else > + { > + /* Inline checking requires a single exit edge. */ > + gimple *last =3D gimple_build_assign (visited, > + build_clobber > + (TREE_TYPE (visited))); > + gimple_seq_add_stmt (&ckseq, last); > + > + if (!count_chkcall) > + { > + edge e =3D single_pred_edge (EXIT_BLOCK_PTR_FOR_FN (cfun)); > + > + if (dump_file) > + { > + if (e->dest =3D=3D EXIT_BLOCK_PTR_FOR_FN (cfun)) > + fprintf (dump_file, > + "Inserting out-of-line check in" > + " block %i's edge to postcheck block %i.\n", > + e->src->index, e->dest->index); > + else > + fprintf (dump_file, > + "Inserting inline check in" > + " block %i's edge to exit.\n", > + e->src->index); > + } > + > + insert_exit_check_on_edge (ckseq, e); > + } > + else > + { > + gcc_checking_assert (count_chkcall =3D=3D 1); > + > + sbitmap_iterator it; > + unsigned i; > + EXECUTE_IF_SET_IN_BITMAP (chkcall_blocks, 0, i, it) > + { > + basic_block bb =3D BASIC_BLOCK_FOR_FN (cfun, i); > + > + gimple_seq seq =3D ckseq; > + gcc_checking_assert (count_chkcall > 0); > + if (--count_chkcall) > + seq =3D gimple_seq_copy (seq); > + > + if (dump_file) > + fprintf (dump_file, > + "Inserting inline check before stmt in block %= i.\n", > + bb->index); > + > + insert_exit_check_in_block (seq, bb); > + } > + > + gcc_checking_assert (count_chkcall =3D=3D 0); > + } > + > + /* The inserted ckseq computes CKFAIL at LAST. Now we have to > + conditionally trap on it. */ > + basic_block insbb =3D gimple_bb (last); > + > + /* Create a block with the unconditional trap. 
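[For reference, the out-of-line checker called here has, going by the builtin type built earlier, a C-level signature along these lines.  The vword typedef is a placeholder for whatever vword_type resolves to on the target, and the parameter names are mine:

  typedef unsigned int vword;  /* placeholder for the target's vword_type */

  /* Implemented in libgcc/hardcfr.c: verify that the bits recorded in
     VISITED describe a valid path through the CFG encoded in CFG, and
     trap otherwise.  */
  extern void __hardcfr_check (__SIZE_TYPE__ blocks,
                               const vword *visited,
                               const vword *cfg);

The call built above passes the block count, the address of the .cfrvisited bitmap, and the address of the static CFG encoding (rtcfg).]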
*/ > + basic_block trp =3D create_empty_bb (insbb); > + gimple_stmt_iterator gsit =3D gsi_after_labels (trp); > + > + gcall *trap =3D gimple_build_call (builtin_decl_explicit > + (BUILT_IN_TRAP), 0); > + gsi_insert_before (&gsit, trap, GSI_SAME_STMT); > + > + if (BB_PARTITION (insbb)) > + BB_SET_PARTITION (trp, BB_COLD_PARTITION); > + > + if (current_loops) > + add_bb_to_loop (trp, current_loops->tree_root); > + > + /* Insert a conditional branch to the trap block. If the > + conditional wouldn't be the last stmt, split the block. */ > + gimple_stmt_iterator gsi =3D gsi_for_stmt (last); > + if (!gsi_one_before_end_p (gsi)) > + split_block (gsi_bb (gsi), gsi_stmt (gsi)); > + > + gcond *cond =3D gimple_build_cond (NE_EXPR, ckfail, > + fold_convert (TREE_TYPE (ckfail)= , > + boolean_false_node= ), > + NULL, NULL); > + gsi_insert_after (&gsi, cond, GSI_SAME_STMT); > + > + /* Adjust the edges. */ > + single_succ_edge (gsi_bb (gsi))->flags &=3D ~EDGE_FALLTHRU; > + single_succ_edge (gsi_bb (gsi))->flags |=3D EDGE_FALSE_VALUE; > + single_succ_edge (gsi_bb (gsi))->probability > + =3D profile_probability::always (); > + edge e =3D make_edge (gsi_bb (gsi), trp, EDGE_TRUE_VALUE); > + e->probability =3D profile_probability::never (); > + gcc_checking_assert (e->dest =3D=3D trp); > + gcc_checking_assert (!e->dest->count.initialized_p ()); > + e->dest->count =3D e->count (); > + > + /* Set the trap's dominator after splitting. */ > + if (dom_info_available_p (CDI_DOMINATORS)) > + set_immediate_dominator (CDI_DOMINATORS, trp, gimple_bb (last))= ; > + } > + > + /* Insert initializers for visited at the entry. Do this after > + other insertions, to avoid messing with block numbers. */ > + gimple_seq iseq =3D NULL; > + > + gcall *vinit =3D gimple_build_call (builtin_decl_explicit > + (BUILT_IN_MEMSET), 3, > + build1 (ADDR_EXPR, > + build_pointer_type > + (TREE_TYPE (visited)), > + visited), > + integer_zero_node, > + TYPE_SIZE_UNIT (TREE_TYPE (visited)= )); > + gimple_seq_add_stmt (&iseq, vinit); > + > + gsi_insert_seq_on_edge_immediate (single_succ_edge > + (ENTRY_BLOCK_PTR_FOR_FN (cfun)), > + iseq); > + } > + > + /* Push onto RTCFG a (mask, index) pair to test for IBB when BB is > + visited. XSELF is to be the ENTRY or EXIT block (depending on > + whether we're looking at preds or succs), to be remapped to BB > + because we can't represent them, and there's no point in testing > + them anyway. Return true if no further blocks need to be visited > + in the list, because we've already encountered a > + self-reference. */ > + bool > + push_rtcfg_pair (basic_block ibb, basic_block bb, > + basic_block xself) > + { > + /* We don't have a bit to test for the entry and exit > + blocks, but it is always visited, so we test for the > + block itself, which gets us the right result and > + enables the self-test optimization below. */ > + if (ibb =3D=3D xself) > + ibb =3D bb; > + > + tree mask, idx =3D vwordidx (ibb, &mask); > + /* Combine masks with the same idx, but not if we're going > + to optimize for self-test. */ > + if (ibb !=3D bb && TREE_PURPOSE (rtcfg) > + && tree_int_cst_equal (idx, TREE_PURPOSE (rtcfg))) > + TREE_VALUE (rtcfg) =3D int_const_binop (BIT_IOR_EXPR, mask, > + TREE_VALUE (rtcfg)); > + else > + rtcfg =3D tree_cons (idx, mask, rtcfg); > + > + /* For self-tests (i.e., tests that the block itself was > + also visited), testing anything else is pointless, > + because it's a tautology, so just drop other edges. 
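[Putting the inline-check plumbing in this hunk together, an instrumented function looks roughly like the pseudo-C below; block numbers are arbitrary and the per-block tests are elided, since the real sequence is built in GIMPLE with the ckfail/ckpart temporaries:

  int
  f (int i)
  {
    unsigned int visited = 0;   /* zeroed on the entry edge (the memset) */
    _Bool ckfail = 0;

    visited |= 1u << 2;         /* each block sets its bit on entry */
    if (i)
      visited |= 1u << 3;
    visited |= 1u << 4;

    /* ckfail accumulates the per-block tests (elided; see the
       build_block_check hunks).  */
    if (ckfail)
      __builtin_trap ();        /* the new cold, never-taken block */
    return i;
  }
]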
*/ > + if (ibb =3D=3D bb) > + { > + while (TREE_PURPOSE (TREE_CHAIN (rtcfg))) > + TREE_CHAIN (rtcfg) =3D TREE_CHAIN (TREE_CHAIN (rtcfg)); > + return true; > + } > + > + return false; > + } > + > + /* Add to CKSEQ stmts to clear CKPART if OBB is visited. */ > + void > + build_block_check (basic_block obb) > + { > + tree vobb =3D fold_convert (TREE_TYPE (ckblk), > + vindex (obb, &ckseq)); > + gassign *blkrunp =3D gimple_build_assign (ckblk, vobb); > + gimple_seq_add_stmt (&ckseq, blkrunp); > + > + gassign *blknotrunp =3D gimple_build_assign (ckinv, > + EQ_EXPR, > + ckblk, > + fold_convert > + (TREE_TYPE (ckblk), > + boolean_false_node)); > + gimple_seq_add_stmt (&ckseq, blknotrunp); > + > + gassign *andblk =3D gimple_build_assign (ckpart, > + BIT_AND_EXPR, > + ckpart, ckinv); > + gimple_seq_add_stmt (&ckseq, andblk); > + } > + > + /* Add to BB code to set its bit in VISITED, and add to RTCFG or > + CKSEQ the data or code needed to check BB's predecessors and > + successors. If CHECKPOINT, assume the block is a checkpoint, > + whether or not it has an edge to EXIT. If POSTCHECK, assume the > + block post-dominates checkpoints and therefore no bitmap setting > + or checks are to be performed in or for it. Do NOT change the > + CFG. */ > + void visit (basic_block bb, bool checkpoint, bool postcheck) > + { > + /* Set the bit in VISITED when entering the block. */ > + gimple_stmt_iterator gsi =3D gsi_after_labels (bb); > + if (!postcheck) > + gsi_insert_seq_before (&gsi, vset (bb), GSI_SAME_STMT); > + > + if (rtcfg) > + { > + if (!postcheck) > + { > + /* Build a list of (index, mask) terminated by (NULL, 0). > + Consolidate masks with the same index when they're > + adjacent. First, predecessors. Count backwards, because > + we're going to reverse the list. The order shouldn't > + matter, but let's not make it surprising. */ > + for (int i =3D EDGE_COUNT (bb->preds); i--; ) > + if (push_rtcfg_pair (EDGE_PRED (bb, i)->src, bb, > + ENTRY_BLOCK_PTR_FOR_FN (cfun))) > + break; > + } > + rtcfg =3D tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rt= cfg); > + > + if (!postcheck) > + { > + /* Then, successors. */ > + if (!checkpoint > + || !push_rtcfg_pair (EXIT_BLOCK_PTR_FOR_FN (cfun), > + bb, EXIT_BLOCK_PTR_FOR_FN (cfun))) > + for (int i =3D EDGE_COUNT (bb->succs); i--; ) > + if (push_rtcfg_pair (EDGE_SUCC (bb, i)->dest, bb, > + EXIT_BLOCK_PTR_FOR_FN (cfun))) > + break; > + } > + rtcfg =3D tree_cons (NULL_TREE, build_int_cst (vword_type, 0), rt= cfg); > + } > + else if (!postcheck) > + { > + /* Schedule test to fail if the block was reached but somehow non= e > + of its predecessors were. */ > + tree bit =3D fold_convert (TREE_TYPE (ckpart), vindex (bb, &ckseq= )); > + gassign *blkrunp =3D gimple_build_assign (ckpart, bit); > + gimple_seq_add_stmt (&ckseq, blkrunp); > + > + for (int i =3D 0, e =3D EDGE_COUNT (bb->preds); i < e; i++) > + build_block_check (EDGE_PRED (bb, i)->src); > + gimple *orfailp =3D gimple_build_assign (ckfail, BIT_IOR_EXPR, > + ckfail, ckpart); > + gimple_seq_add_stmt (&ckseq, orfailp); > + > + /* Likewise for successors. 
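[So the predecessor test assembled above reduces to: flag a failure if this block's bit is set while none of its predecessors' bits are (and, symmetrically just below, none of its successors').  In plain C, for one block BB with two predecessors and assuming 32-bit visited words, it is essentially:

  /* A true result here is what ends up ORed into ckfail.  */
  static _Bool
  reached_without_predecessor (const unsigned int *visited,
                               int bb, int p1, int p2)
  {
    _Bool ckpart = (visited[bb / 32] >> (bb % 32)) & 1;  /* BB was reached */
    ckpart &= !((visited[p1 / 32] >> (p1 % 32)) & 1);    /* ...but not via P1 */
    ckpart &= !((visited[p2 / 32] >> (p2 % 32)) & 1);    /* ...nor via P2 */
    return ckpart;
  }
]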
*/ > + gassign *blkruns =3D gimple_build_assign (ckpart, unshare_expr (b= it)); > + gimple_seq_add_stmt (&ckseq, blkruns); > + > + if (checkpoint) > + build_block_check (EXIT_BLOCK_PTR_FOR_FN (cfun)); > + for (int i =3D 0, e =3D EDGE_COUNT (bb->succs); i < e; i++) > + build_block_check (EDGE_SUCC (bb, i)->dest); > + > + gimple *orfails =3D gimple_build_assign (ckfail, BIT_IOR_EXPR, > + ckfail, ckpart); > + gimple_seq_add_stmt (&ckseq, orfails); > + } > + } > +}; > + > +/* Avoid checking before noreturn calls that are known (expected, > + really) to finish by throwing an exception, rather than by ending > + the program or looping forever. Such functions have to be > + annotated, with an attribute (expected_throw) or flag (ECF_XTHROW), > + so that exception-raising functions, such as C++'s __cxa_throw, > + __cxa_rethrow, and Ada's gnat_rcheck_*, gnat_reraise*, > + ada.exception.raise_exception*, and the language-independent > + unwinders could be detected here and handled differently from other > + noreturn functions. */ > +static bool > +always_throwing_noreturn_call_p (gimple *stmt) > +{ > + if (!is_a (stmt)) > + return is_a (stmt); > + > + gcall *call =3D as_a (stmt); > + return (gimple_call_noreturn_p (call) > + && gimple_call_expected_throw_p (call)); > +} > + > +/* Control flow redundancy hardening: record the execution path, and > + verify at exit that an expect path was taken. */ > + > +unsigned int > +pass_harden_control_flow_redundancy::execute (function *fun) > +{ > + bool const check_at_escaping_exceptions > + =3D (flag_exceptions > + && flag_harden_control_flow_redundancy_check_exceptions); > + bool const check_before_noreturn_calls > + =3D flag_harden_control_flow_redundancy_check_noreturn > HCFRNR_NEVE= R; > + bool const check_before_nothrow_noreturn_calls > + =3D (check_before_noreturn_calls > + && flag_harden_control_flow_redundancy_check_noreturn >=3D HCFRNR= _NOTHROW); > + bool const check_before_throwing_noreturn_calls > + =3D (flag_exceptions > + && check_before_noreturn_calls > + && flag_harden_control_flow_redundancy_check_noreturn > HCFRNR_NO= THROW); > + bool const check_before_always_throwing_noreturn_calls > + =3D (flag_exceptions > + && check_before_noreturn_calls > + && flag_harden_control_flow_redundancy_check_noreturn >=3D HCFRNR= _ALWAYS); > + basic_block bb; > + basic_block bb_eh_cleanup =3D NULL; > + > + if (flag_harden_control_flow_redundancy_skip_leaf) > + { > + bool found_calls_p =3D false; > + > + FOR_EACH_BB_FN (bb, fun) > + { > + for (gimple_stmt_iterator gsi =3D gsi_last_bb (bb); > + !gsi_end_p (gsi); gsi_prev (&gsi)) > + if (is_a (gsi_stmt (gsi))) > + { > + found_calls_p =3D true; > + break; > + } > + if (found_calls_p) > + break; > + } > + > + if (!found_calls_p) > + { > + if (dump_file) > + fprintf (dump_file, > + "Disabling CFR for leaf function, as requested\n"); > + > + return 0; > + } > + } > + > + if (check_at_escaping_exceptions) > + { > + int lp_eh_cleanup =3D -1; > + > + /* Record the preexisting blocks, to avoid visiting newly-created > + blocks. */ > + auto_sbitmap to_visit (last_basic_block_for_fn (fun)); > + bitmap_clear (to_visit); > + > + FOR_EACH_BB_FN (bb, fun) > + bitmap_set_bit (to_visit, bb->index); > + > + /* Scan the blocks for stmts with escaping exceptions, that > + wouldn't be denoted in the CFG, and associate them with an > + empty cleanup handler around the whole function. 
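[To make the escaping-exception case concrete: with -fexceptions, a call like the one below can leave f by an exception with no local handler, so there is no CFG edge on which to place a check; the empty cleanup region created here is what gives the verifier a place to run before the exception escapes.  Names invented for illustration:

  extern void may_throw (void);  /* no handler anywhere in f */

  int
  f (int i)
  {
    may_throw ();  /* an escaping exception leaves f with no normal exit
                      edge; the synthetic cleanup fills that gap */
    return i;
  }
]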
Walk > + backwards, so that even when we split the block, */ > + sbitmap_iterator it; > + unsigned i; > + EXECUTE_IF_SET_IN_BITMAP (to_visit, 0, i, it) > + { > + bb =3D BASIC_BLOCK_FOR_FN (fun, i); > + > + for (gimple_stmt_iterator gsi =3D gsi_last_bb (bb); > + !gsi_end_p (gsi); gsi_prev (&gsi)) > + { > + gimple *stmt =3D gsi_stmt (gsi); > + if (!stmt_could_throw_p (fun, stmt)) > + continue; > + > + /* If it must not throw, or if it already has a handler, > + we need not worry about it. */ > + if (lookup_stmt_eh_lp (stmt) !=3D 0) > + continue; > + > + /* Don't split blocks at, nor add EH edges to, tail > + calls, we will add verification before the call > + anyway. */ > + if (is_a (stmt) > + && (gimple_call_must_tail_p (as_a (stmt)) > + || gimple_call_tail_p (as_a (stmt)) > + || returning_call_p (as_a (stmt)))) > + continue; > + > + if (!gsi_one_before_end_p (gsi)) > + split_block (bb, stmt); > + /* A resx or noreturn call needs not be associated with > + the cleanup handler if we're going to add checking > + before it. We only test cases that didn't require > + block splitting because noreturn calls would always > + be at the end of blocks, and we test for zero > + successors because if there is an edge, it's not > + noreturn, as any EH edges would have already been > + caught by the lookup_stmt_eh_lp test above. */ > + else if (check_before_noreturn_calls > + && EDGE_COUNT (bb->succs) =3D=3D 0 > + && (is_a (stmt) > + ? check_before_always_throwing_noreturn_calls > + : (!is_a (stmt) > + || !gimple_call_noreturn_p (stmt)) > + ? (gcc_unreachable (), false) > + : (!flag_exceptions > + || gimple_call_nothrow_p (as_a (s= tmt))) > + ? check_before_nothrow_noreturn_calls > + : always_throwing_noreturn_call_p (stmt) > + ? check_before_always_throwing_noreturn_calls > + : check_before_throwing_noreturn_calls)) > + { > + if (dump_file) > + { > + fprintf (dump_file, > + "Bypassing cleanup for noreturn stmt" > + " in block %i:\n", > + bb->index); > + print_gimple_stmt (dump_file, stmt, 0); > + } > + continue; > + } > + > + if (!bb_eh_cleanup) > + { > + bb_eh_cleanup =3D create_empty_bb (bb); > + if (dom_info_available_p (CDI_DOMINATORS)) > + set_immediate_dominator (CDI_DOMINATORS, bb_eh_cleanu= p, bb); > + if (current_loops) > + add_bb_to_loop (bb_eh_cleanup, current_loops->tree_ro= ot); > + > + /* Make the new block an EH cleanup for the call. */ > + eh_region new_r =3D gen_eh_region_cleanup (NULL); > + eh_landing_pad lp =3D gen_eh_landing_pad (new_r); > + tree label =3D gimple_block_label (bb_eh_cleanup); > + lp->post_landing_pad =3D label; > + EH_LANDING_PAD_NR (label) =3D lp_eh_cleanup =3D lp->ind= ex; > + > + /* Just propagate the exception. > + We will later insert the verifier call. 
*/ > + gimple_stmt_iterator ehgsi; > + ehgsi =3D gsi_after_labels (bb_eh_cleanup); > + gresx *resx =3D gimple_build_resx (new_r->index); > + gsi_insert_before (&ehgsi, resx, GSI_SAME_STMT); > + > + if (dump_file) > + fprintf (dump_file, > + "Created cleanup block %i:\n", > + bb_eh_cleanup->index); > + } > + else if (dom_info_available_p (CDI_DOMINATORS)) > + { > + basic_block immdom; > + immdom =3D get_immediate_dominator (CDI_DOMINATORS, > + bb_eh_cleanup); > + if (!dominated_by_p (CDI_DOMINATORS, bb, immdom)) > + { > + immdom =3D nearest_common_dominator (CDI_DOMINATORS= , > + immdom, bb); > + set_immediate_dominator (CDI_DOMINATORS, > + bb_eh_cleanup, immdom); > + } > + } > + > + if (dump_file) > + { > + fprintf (dump_file, > + "Associated cleanup block with stmt in block %= i:\n", > + bb->index); > + print_gimple_stmt (dump_file, stmt, 0); > + } > + > + add_stmt_to_eh_lp (stmt, lp_eh_cleanup); > + /* Finally, wire the EH cleanup block into the CFG. */ > + edge neeh =3D make_eh_edges (stmt); > + neeh->probability =3D profile_probability::never (); > + gcc_checking_assert (neeh->dest =3D=3D bb_eh_cleanup); > + if (neeh->dest->count.initialized_p ()) > + neeh->dest->count +=3D neeh->count (); > + else > + neeh->dest->count =3D neeh->count (); > + } > + } > + > + if (bb_eh_cleanup) > + { > + /* A cfg_cleanup after bb_eh_cleanup makes for a more compact > + rtcfg, and it avoids bb numbering differences when we split > + blocks because of trailing debug insns only. */ > + cleanup_tree_cfg (); > + gcc_checking_assert (EDGE_COUNT (bb_eh_cleanup->succs) =3D=3D 0= ); > + } > + } > + > + /* These record blocks with calls that are to be preceded by > + checkpoints, such as noreturn calls (if so chosen), must-tail > + calls, potential early-marked tail calls, and returning calls (if > + so chosen). */ > + int count_chkcall =3D 0; > + auto_sbitmap chkcall_blocks (last_basic_block_for_fn (fun)); > + bitmap_clear (chkcall_blocks); > + > + /* We wish to add verification at blocks without successors, such as > + noreturn calls (raising or not) and the reraise at the cleanup > + block, but not other reraises: they will go through the cleanup > + block. */ > + if (check_before_noreturn_calls) > + FOR_EACH_BB_FN (bb, fun) > + { > + gimple_stmt_iterator gsi =3D gsi_last_bb (bb); > + if (gsi_end_p (gsi)) > + continue; > + gimple *stmt =3D gsi_stmt (gsi); > + > + if (EDGE_COUNT (bb->succs) =3D=3D 0) > + { > + /* A stmt at the end of a block without any successors is > + either a resx or a noreturn call without a local > + handler. Check that it's one of the desired > + checkpoints. */ > + if (flag_exceptions && is_a (stmt) > + ? (check_before_always_throwing_noreturn_calls > + || bb =3D=3D bb_eh_cleanup) > + : (!is_a (stmt) > + || !gimple_call_noreturn_p (stmt)) > + ? (stmt_can_make_abnormal_goto (stmt) > + /* ??? Check before indirect nonlocal goto, or > + calls thereof? */ > + ? false > + /* Catch cases in which successors would be > + expected. */ > + : (gcc_unreachable (), false)) > + : (!flag_exceptions > + || gimple_call_nothrow_p (as_a (stmt))) > + ? check_before_nothrow_noreturn_calls > + : always_throwing_noreturn_call_p (stmt) > + ? 
check_before_always_throwing_noreturn_calls > + : check_before_throwing_noreturn_calls) > + { > + if (dump_file) > + { > + fprintf (dump_file, > + "Scheduling check before stmt" > + " in succ-less block %i:\n", > + bb->index); > + print_gimple_stmt (dump_file, stmt, 0); > + } > + > + if (bitmap_set_bit (chkcall_blocks, bb->index)) > + count_chkcall++; > + else > + gcc_unreachable (); > + } > + continue; > + } > + > + /* If there are no exceptions, it would seem like any noreturn > + call must have zero successor edges, but __builtin_return > + gets successor edges. We don't want to handle it here, it > + will be dealt with in sibcall_search_preds. Otherwise, > + check for blocks without non-EH successors, but skip those > + with resx stmts and edges (i.e., those other than that in > + bb_eh_cleanup), since those will go through bb_eh_cleanup, > + that will have been counted as noreturn above because it > + has no successors. */ > + gcc_checking_assert (bb !=3D bb_eh_cleanup > + || !check_at_escaping_exceptions); > + if (flag_exceptions && is_a (stmt) > + ? check_before_always_throwing_noreturn_calls > + : (!is_a (stmt) > + || !gimple_call_noreturn_p (stmt)) > + ? false > + : (!flag_exceptions > + || gimple_call_nothrow_p (as_a (stmt))) > + ? false /* rather than check_before_nothrow_noreturn_calls */ > + : always_throwing_noreturn_call_p (stmt) > + ? check_before_always_throwing_noreturn_calls > + : check_before_throwing_noreturn_calls) > + { > + gcc_checking_assert (single_succ_p (bb) > + && (single_succ_edge (bb)->flags & EDGE_= EH)); > + > + if (dump_file) > + { > + fprintf (dump_file, > + "Scheduling check before stmt" > + " in EH-succ block %i:\n", > + bb->index); > + print_gimple_stmt (dump_file, stmt, 0); > + } > + > + if (bitmap_set_bit (chkcall_blocks, bb->index)) > + count_chkcall++; > + else > + gcc_unreachable (); > + } > + } > + else if (bb_eh_cleanup) > + { > + if (bitmap_set_bit (chkcall_blocks, bb_eh_cleanup->index)) > + count_chkcall++; > + else > + gcc_unreachable (); > + } > + > + gcc_checking_assert (!bb_eh_cleanup > + || bitmap_bit_p (chkcall_blocks, bb_eh_cleanup->in= dex)); > + > + /* If we don't have edges to exit nor noreturn calls (including the > + cleanup reraise), then we may skip instrumentation: that would > + amount to a function that ends with an infinite loop. */ > + if (!count_chkcall > + && EDGE_COUNT (EXIT_BLOCK_PTR_FOR_FN (fun)->preds) =3D=3D 0) > + { > + if (dump_file) > + fprintf (dump_file, > + "Disabling CFR, no exit paths to check\n"); > + > + return 0; > + } > + > + /* Search for must-tail calls, early-marked potential tail calls, > + and, if requested, returning calls. As we introduce early > + checks, */ > + int count_postchk =3D 0; > + auto_sbitmap postchk_blocks (last_basic_block_for_fn (fun)); > + bitmap_clear (postchk_blocks); > + chk_edges_t chk_edges; > + hardcfr_sibcall_search_preds (EXIT_BLOCK_PTR_FOR_FN (fun), chk_edges, > + count_chkcall, chkcall_blocks, > + count_postchk, postchk_blocks, > + NULL); > + > + rt_bb_visited vstd (chk_edges.length () + count_chkcall); > + > + auto_sbitmap combined_blocks (last_basic_block_for_fn (fun)); > + bitmap_copy (combined_blocks, chkcall_blocks); > + int i; > + edge *e; > + FOR_EACH_VEC_ELT (chk_edges, i, e) > + if (!bitmap_set_bit (combined_blocks, (*e)->src->index)) > + /* There may be multiple chk_edges with the same src block; > + guard againt overlaps with chkcall_blocks only. 
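[Regarding the early return a few hunks up when there are neither edges to the exit block nor noreturn calls: that is the infinite-loop case, for example:

  /* Never reaches the exit block and calls nothing noreturn, so there
     is no checkpoint at which to verify the recorded path; the pass
     leaves such functions alone.  */
  void
  spin (void)
  {
    for (;;)
      continue;
  }
]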
*/ > + gcc_assert (!bitmap_bit_p (chkcall_blocks, (*e)->src->index)); > + > + /* Visit blocks in index order, because building rtcfg depends on > + that. Blocks must be compact, which the cleanup_cfg requirement > + ensures. This would also enable FOR_EACH_BB_FN to be used to > + iterate in index order, but bb_eh_cleanup block splits and > + insertions changes that. */ > + gcc_checking_assert (n_basic_blocks_for_fn (fun) > + =3D=3D last_basic_block_for_fn (fun)); > + for (int i =3D NUM_FIXED_BLOCKS; i < n_basic_blocks_for_fn (fun); i++) > + { > + bb =3D BASIC_BLOCK_FOR_FN (fun, i); > + gcc_checking_assert (bb->index =3D=3D i); > + vstd.visit (bb, bitmap_bit_p (combined_blocks, i), > + bitmap_bit_p (postchk_blocks, i)); > + } > + > + vstd.check (chk_edges, count_chkcall, chkcall_blocks); > + > + return > + TODO_update_ssa > + | TODO_cleanup_cfg > + | TODO_verify_il; > +} > + > +/* Instantiate a hardcfr pass. */ > + > +gimple_opt_pass * > +make_pass_harden_control_flow_redundancy (gcc::context *ctxt) > +{ > + return new pass_harden_control_flow_redundancy (ctxt); > +} > diff --git a/gcc/gimple.cc b/gcc/gimple.cc > index 46f28784e0721..7924d900b358e 100644 > --- a/gcc/gimple.cc > +++ b/gcc/gimple.cc > @@ -399,6 +399,10 @@ gimple_build_call_from_tree (tree t, tree fnptrtype) > gimple_call_set_from_thunk (call, CALL_FROM_THUNK_P (t)); > gimple_call_set_va_arg_pack (call, CALL_EXPR_VA_ARG_PACK (t)); > gimple_call_set_nothrow (call, TREE_NOTHROW (t)); > + if (fndecl) > + gimple_call_set_expected_throw (call, > + flags_from_decl_or_type (fndecl) > + & ECF_XTHROW); > gimple_call_set_by_descriptor (call, CALL_EXPR_BY_DESCRIPTOR (t)); > copy_warning (call, t); > > @@ -1550,6 +1554,8 @@ gimple_call_flags (const gimple *stmt) > > if (stmt->subcode & GF_CALL_NOTHROW) > flags |=3D ECF_NOTHROW; > + if (stmt->subcode & GF_CALL_XTHROW) > + flags |=3D ECF_XTHROW; > > if (stmt->subcode & GF_CALL_BY_DESCRIPTOR) > flags |=3D ECF_BY_DESCRIPTOR; > diff --git a/gcc/gimple.h b/gcc/gimple.h > index 2d0ac103636d0..1b0cd4b8ad890 100644 > --- a/gcc/gimple.h > +++ b/gcc/gimple.h > @@ -150,6 +150,7 @@ enum gf_mask { > GF_CALL_BY_DESCRIPTOR =3D 1 << 10, > GF_CALL_NOCF_CHECK =3D 1 << 11, > GF_CALL_FROM_NEW_OR_DELETE =3D 1 << 12, > + GF_CALL_XTHROW =3D 1 << 13, > GF_OMP_PARALLEL_COMBINED =3D 1 << 0, > GF_OMP_TASK_TASKLOOP =3D 1 << 0, > GF_OMP_TASK_TASKWAIT =3D 1 << 1, > @@ -3561,6 +3562,28 @@ gimple_call_nothrow_p (gcall *s) > return (gimple_call_flags (s) & ECF_NOTHROW) !=3D 0; > } > > +/* If EXPECTED_THROW_P is true, GIMPLE_CALL S is a call that is known > + to be more likely to throw than to run forever, terminate the > + program or return by other means. */ > + > +static inline void > +gimple_call_set_expected_throw (gcall *s, bool expected_throw_p) > +{ > + if (expected_throw_p) > + s->subcode |=3D GF_CALL_XTHROW; > + else > + s->subcode &=3D ~GF_CALL_XTHROW; > +} > + > +/* Return true if S is a call that is more likely to end by > + propagating an exception than by other means. */ > + > +static inline bool > +gimple_call_expected_throw_p (gcall *s) > +{ > + return (gimple_call_flags (s) & ECF_XTHROW) !=3D 0; > +} > + > /* If FOR_VAR is true, GIMPLE_CALL S is a call to builtin_alloca that > is known to be emitted for VLA objects. Those are wrapped by > stack_save/stack_restore calls and hence can't lead to unbounded > diff --git a/gcc/params.opt b/gcc/params.opt > index fffa8b1bc64df..f1202abc00d09 100644 > --- a/gcc/params.opt > +++ b/gcc/params.opt > @@ -174,6 +174,14 @@ Maximum number of arrays per SCoP. 
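[On the gimple.{cc,h} hunks just above: GF_CALL_XTHROW mirrors ECF_XTHROW onto the call statement so the pass can query gimple_call_expected_throw_p later.  At the user level this presumably comes from the expected_throw annotation mentioned in the pass's comments; a hypothetical declaration using it, assuming the attribute is exposed by the front ends under that spelling, would be:

  /* noreturn, but expected to exit by raising an exception rather than
     by terminating the program, so =no-xthrow checking can skip calls
     to it.  */
  extern void raise_error (const char *msg)
    __attribute__ ((__noreturn__, __expected_throw__));
]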
> Common Joined UInteger Var(param_graphite_max_nb_scop_params) Init(10) P= aram Optimization > Maximum number of parameters in a SCoP. > > +-param=3Dhardcfr-max-blocks=3D > +Common Joined UInteger Var(param_hardcfr_max_blocks) Init(0) Param Optim= ization > +Maximum number of blocks for -fharden-control-flow-redundancy. > + > +-param=3Dhardcfr-max-inline-blocks=3D > +Common Joined UInteger Var(param_hardcfr_max_inline_blocks) Init(16) Par= am Optimization > +Maximum number of blocks for in-line -fharden-control-flow-redundancy. > + > -param=3Dhash-table-verification-limit=3D > Common Joined UInteger Var(param_hash_table_verification_limit) Init(10)= Param > The number of elements for which hash table verification is done for eac= h searched element. > diff --git a/gcc/passes.def b/gcc/passes.def > index df7965dc50f6a..1e1950bdb39cb 100644 > --- a/gcc/passes.def > +++ b/gcc/passes.def > @@ -193,6 +193,7 @@ along with GCC; see the file COPYING3. If not see > NEXT_PASS (pass_omp_device_lower); > NEXT_PASS (pass_omp_target_link); > NEXT_PASS (pass_adjust_alignment); > + NEXT_PASS (pass_harden_control_flow_redundancy); > NEXT_PASS (pass_all_optimizations); > PUSH_INSERT_PASSES_WITHIN (pass_all_optimizations) > NEXT_PASS (pass_remove_cgraph_callee_edges); > diff --git a/gcc/testsuite/c-c++-common/harden-cfr-noret-never-O0.c b/gcc= /testsuite/c-c++-common/harden-cfr-noret-never-O0.c > new file mode 100644 > index 0000000000000..a6992eb9f8e6d > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/harden-cfr-noret-never-O0.c > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -O0 -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that we don't insert checking before noreturn calls. -O0 is te= sted > + separately because h is not found to be noreturn without optimization= . */ > + > +#include "torture/harden-cfr-noret.c" > + > +/* No out-of-line checks. */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 0 "hardcfr" } } */ > +/* Only one inline check at the end of f and of h2. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-always.c = b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-always.c > new file mode 100644 > index 0000000000000..26c0f27071627 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-always.c > @@ -0,0 +1,11 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dalways -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check the noreturn handling of a builtin call with always. */ > + > +#include "harden-cfr-abrt.c" > + > +/* Out-of-line checking, before both builtin_abort and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_abort in g. 
*/ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-never.c b= /gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-never.c > new file mode 100644 > index 0000000000000..a9eca9893bb0e > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-never.c > @@ -0,0 +1,11 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check the noreturn handling of a builtin call with never. */ > + > +#include "harden-cfr-abrt.c" > + > +/* No out-of-line checking. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 0 "hardcfr" } } *= / > +/* Inline checking only before return in f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-no-xthrow= .c b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-no-xthrow.c > new file mode 100644 > index 0000000000000..eb7589f6d38c5 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-no-xthrow.c > @@ -0,0 +1,11 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dno-xthrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check the noreturn handling of a builtin call with no-xthrow. */ > + > +#include "harden-cfr-abrt.c" > + > +/* Out-of-line checking, before both builtin_abort and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_abort in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-nothrow.c= b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-nothrow.c > new file mode 100644 > index 0000000000000..24363bdfe5721 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt-nothrow.c > @@ -0,0 +1,11 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check the noreturn handling of a builtin call with =3Dnothrow. */ > + > +#include "harden-cfr-abrt.c" > + > +/* Out-of-line checking, before both builtin_abort and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_abort in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt.c b/gcc/t= estsuite/c-c++-common/torture/harden-cfr-abrt.c > new file mode 100644 > index 0000000000000..1ed727317f138 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-abrt.c > @@ -0,0 +1,19 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fdump-tree-hardcfr -f= fat-lto-objects" } */ > + > +/* Check the noreturn handling of a builtin call. */ > + > +int f(int i) { > + if (!i) > + __builtin_abort (); > + return i; > +} > + > +int g() { > + __builtin_abort (); > +} > + > +/* Out-of-line checking, before both builtin_abort and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. 
*/ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-always.c b/gcc= /testsuite/c-c++-common/torture/harden-cfr-always.c > new file mode 100644 > index 0000000000000..6e0767aad69f2 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-always.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dalways -fdump-tree-hardcfr --param hardcfr-max-blocks=3D9 --para= m hardcfr-max-inline-blocks=3D5 -ffat-lto-objects -w" } */ > + > +/* Check the instrumentation and the parameters with checking before > + all noreturn calls. */ > + > +#include "harden-cfr.c" > + > +/* Inlined checking thus trap for f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > +/* Out-of-line checking for g (param), and before both noreturn calls in= main. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 3 "hardcfr" } } *= / > +/* No checking for h (too many blocks). */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-always.c = b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-always.c > new file mode 100644 > index 0000000000000..779896c60e846 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-always.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dalways -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that, even enabling all checks before noreturn calls (leaving > + returning calls enabled), we get checks before __builtin_return witho= ut > + duplication (__builtin_return is both noreturn and a returning call).= */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-never.c b= /gcc/testsuite/c-c++-common/torture/harden-cfr-bret-never.c > new file mode 100644 > index 0000000000000..49ce17f5b937c > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-never.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that, even enabling checks before never noreturn calls (leaving > + returning calls enabled), we get checks before __builtin_return witho= ut > + duplication (__builtin_return is both noreturn and a returning call).= */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. 
*/ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no-xthrow= .c b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no-xthrow.c > new file mode 100644 > index 0000000000000..78e5bf4143927 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-no-xthrow.c > @@ -0,0 +1,14 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dno-xthrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that, even enabling checks before no-xthrow-throwing noreturn c= alls > + (leaving returning calls enabled), we get checks before __builtin_ret= urn > + without duplication (__builtin_return is both noreturn and a returnin= g > + call). */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-noopt.c b= /gcc/testsuite/c-c++-common/torture/harden-cfr-bret-noopt.c > new file mode 100644 > index 0000000000000..1512614791ff2 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-noopt.c > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ff= at-lto-objects" } */ > + > +/* Check that, even disabling checks before both noreturn and returning > + calls, we still get checks before __builtin_return. */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-noret.c b= /gcc/testsuite/c-c++-common/torture/harden-cfr-bret-noret.c > new file mode 100644 > index 0000000000000..fd95bb7e3e334 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-noret.c > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that, even disabling checks before returning calls (leaving nor= eturn > + calls enabled), we still get checks before __builtin_return. */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. 
*/ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-nothrow.c= b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-nothrow.c > new file mode 100644 > index 0000000000000..c5c361234c499 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-nothrow.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that, even enabling checks before nothrow noreturn calls (leavi= ng > + returning calls enabled), we get checks before __builtin_return witho= ut > + duplication (__builtin_return is both noreturn and a returning call).= */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-retcl.c b= /gcc/testsuite/c-c++-common/torture/harden-cfr-bret-retcl.c > new file mode 100644 > index 0000000000000..137dfbb95d6bb > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret-retcl.c > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that, even disabling checks before noreturn calls (leaving retu= rning > + calls enabled), we still get checks before __builtin_return. */ > + > +#include "harden-cfr-bret.c" > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-bret.c b/gcc/t= estsuite/c-c++-common/torture/harden-cfr-bret.c > new file mode 100644 > index 0000000000000..b459ff6b86491 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-bret.c > @@ -0,0 +1,17 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fdump-tree-hardcfr -f= fat-lto-objects" } */ > + > +int f(int i) { > + if (i) > + __builtin_return (&i); > + return i; > +} > + > +int g(int i) { > + __builtin_return (&i); > +} > + > +/* Out-of-line checking, before both builtin_return and return in f. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > +/* Inline checking before builtin_return in g. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-never.c b/gcc/= testsuite/c-c++-common/torture/harden-cfr-never.c > new file mode 100644 > index 0000000000000..7fe0bb4a66307 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-never.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fdump-tree-hardcfr --param hardcfr-max-blocks=3D9 --param= hardcfr-max-inline-blocks=3D5 -ffat-lto-objects -w" } */ > + > +/* Check the instrumentation and the parameters without checking before > + noreturn calls. 
*/ > + > +#include "harden-cfr.c" > + > +/* Inlined checking thus trap for f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > +/* Out-of-line checking for g (param). */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 1 "hardcfr" } } *= / > +/* No checking for h (too many blocks) or main (no edges to exit block).= */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-no-xthrow.c b/= gcc/testsuite/c-c++-common/torture/harden-cfr-no-xthrow.c > new file mode 100644 > index 0000000000000..56ed9d5d4d533 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-no-xthrow.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dno-xthrow -fdump-tree-hardcfr --param hardcfr-max-blocks=3D9 --p= aram hardcfr-max-inline-blocks=3D5 -ffat-lto-objects -w" } */ > + > +/* Check the instrumentation and the parameters with checking before > + all noreturn calls that aren't expected to throw. */ > + > +#include "harden-cfr.c" > + > +/* Inlined checking thus trap for f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > +/* Out-of-line checking for g (param), and before both noreturn calls in= main. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 3 "hardcfr" } } *= / > +/* No checking for h (too many blocks). */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-never.c = b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-never.c > new file mode 100644 > index 0000000000000..8bd2d13ac18ef > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-never.c > @@ -0,0 +1,18 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that we don't insert checking before noreturn calls. -O0 is te= sted > + separately because h is not found to be noreturn without optimization= , which > + affects codegen for h2, so h2 is omitted here at -O0. */ > + > +#if !__OPTIMIZE__ > +# define OMIT_H2 > +#endif > + > +#include "harden-cfr-noret.c" > + > + > +/* No out-of-line checks. */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 0 "hardcfr" } } */ > +/* Only one inline check at the end of f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-noexcept= .c b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-noexcept.c > new file mode 100644 > index 0000000000000..a804a6cfe59b7 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-noexcept.c > @@ -0,0 +1,16 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fno-exceptions -fdump-tree-hardcfr -ffat-lto-objects" }= */ > + > +/* Check that -fno-exceptions makes for implicit nothrow in noreturn > + handling. */ > + > +#define ATTR_NOTHROW_OPT > + > +#include "harden-cfr-noret.c" > + > +/* One out-of-line check before the noreturn call in f, and another at t= he end > + of f. */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* One inline check in h, before the noreturn call, and another in h2, b= efore > + or after the call, depending on noreturn detection. 
*/ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-nothrow.= c b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-nothrow.c > new file mode 100644 > index 0000000000000..f390cfdbc5930 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret-nothrow.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that we insert checking before nothrow noreturn calls. */ > + > +#include "harden-cfr-noret.c" > + > +/* One out-of-line check before the noreturn call in f, and another at t= he end > + of f. */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* One inline check in h, before the noreturn call, and another in h2, b= efore > + or after the call, depending on noreturn detection. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c b/gcc/= testsuite/c-c++-common/torture/harden-cfr-noret.c > new file mode 100644 > index 0000000000000..fdd803109a4ae > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-noret.c > @@ -0,0 +1,38 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dalways -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that we insert checking before all noreturn calls. */ > + > +#ifndef ATTR_NOTHROW_OPT /* Overridden in harden-cfr-noret-noexcept. */ > +#define ATTR_NOTHROW_OPT __attribute__ ((__nothrow__)) > +#endif > + > +extern void __attribute__ ((__noreturn__)) ATTR_NOTHROW_OPT g (void); > + > +void f(int i) { > + if (i) > + /* Out-of-line checks here... */ > + g (); > + /* ... and here. */ > +} > + > +void __attribute__ ((__noinline__, __noclone__)) > +h(void) { > + /* Inline check here. */ > + g (); > +} > + > +#ifndef OMIT_H2 /* from harden-cfr-noret-never. */ > +void h2(void) { > + /* Inline check either here, whether because of noreturn or tail call.= .. */ > + h (); > + /* ... or here, if not optimizing. */ > +} > +#endif > + > +/* One out-of-line check before the noreturn call in f, and another at t= he end > + of f. */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* One inline check in h, before the noreturn call, and another in h2, b= efore > + or after the call, depending on noreturn detection. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c b/gcc= /testsuite/c-c++-common/torture/harden-cfr-notail.c > new file mode 100644 > index 0000000000000..6d11487bbba40 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-notail.c > @@ -0,0 +1,8 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-exc= eptions -fno-hardcfr-check-returning-calls -fdump-tree-hardcfr -ffat-lto-ob= jects" } */ > + > +#include "harden-cfr-tail.c" > + > +/* Inline checking after the calls, disabling tail calling. 
*/ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 5 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt= " 0 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-nothrow.c b/gc= c/testsuite/c-c++-common/torture/harden-cfr-nothrow.c > new file mode 100644 > index 0000000000000..da54fc0b57a51 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-nothrow.c > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fdump-tree-hardcfr --param hardcfr-max-blocks=3D9 --par= am hardcfr-max-inline-blocks=3D5 -ffat-lto-objects -w" } */ > + > +/* Check the instrumentation and the parameters without checking before > + nothrow noreturn calls. */ > + > +#include "harden-cfr.c" > + > +/* Inlined checking thus trap for f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > +/* Out-of-line checking for g (param), and before both noreturn calls in= main. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 3 "hardcfr" } } *= / > +/* No checking for h (too many blocks). */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-returning.c b/= gcc/testsuite/c-c++-common/torture/harden-cfr-returning.c > new file mode 100644 > index 0000000000000..550b02ca08816 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-returning.c > @@ -0,0 +1,35 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-return= ing-calls -fno-exceptions -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that we insert checks before returning calls and alternate path= s, even > + at -O0, because of the explicit command-line flag. */ > + > +void g (void); > +void g2 (void); > +void g3 (void); > + > +void f (int i) { > + if (!i) > + /* Out-of-line checks here... */ > + g (); > + else if (i > 0) > + /* here... */ > + g2 (); > + /* else */ > + /* and in the implicit else here. */ > +} > + > +void f2 (int i) { > + if (!i) > + /* Out-of-line check here... */ > + g (); > + else if (i > 0) > + /* here... */ > + g2 (); > + else > + /* and here. */ > + g3 (); > +} > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 6 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 0 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-skip-leaf.c b/= gcc/testsuite/c-c++-common/torture/harden-cfr-skip-leaf.c > new file mode 100644 > index 0000000000000..85ecaa04d04cb > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-skip-leaf.c > @@ -0,0 +1,10 @@ > +/* { dg-do run } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-skip-leaf -f= dump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Test skipping instrumentation of leaf functions. */ > + > +#include "harden-cfr.c" > + > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 0 "hardcfr" } } */ > +/* Only main isn't leaf. 
*/ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 2 "hardcfr" } } *= / > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c b/gcc/t= estsuite/c-c++-common/torture/harden-cfr-tail.c > new file mode 100644 > index 0000000000000..d5467eafa9f8e > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr-tail.c > @@ -0,0 +1,52 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-return= ing-calls -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objec= ts -Wno-return-type" } */ > + > +/* Check that we insert CFR checking so as to not disrupt tail calls. > + Mandatory tail calls are not available in C, and optimizing calls as = tail > + calls only takes place after hardcfr, so we insert checking before ca= lls > + followed by copies and return stmts with the same return value, that = might > + (or might not) end up optimized to tail calls. */ > + > +extern int g (int i); > + > +int f1(int i) { > + /* Inline check before the returning call. */ > + return g (i); > +} > + > +extern void g2 (int i); > + > +void f2(int i) { > + /* Inline check before the returning call, that ignores the returned v= alue, > + matching the value-less return. */ > + g2 (i); > + return; > +} > + > +void f3(int i) { > + /* Inline check before the returning call. */ > + g (i); > +} > + > +void f4(int i) { > + if (i) > + /* Out-of-line check before the returning call. */ > + return g2 (i); > + /* Out-of-line check before implicit return. */ > +} > + > +int f5(int i) { > + /* Not regarded as a returning call, returning value other than callee= 's > + returned value. */ > + g (i); > + /* Inline check after the non-returning call. */ > + return i; > +} > + > +/* Out-of-line checks in f4, before returning calls and before return. = */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* Inline checking in all other functions. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 4 "hardcfr" } } */ > +/* Check before tail-call in all but f5, but f4 is out-of-line. */ > +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt= " 3 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before= stmt" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/c-c++-common/torture/harden-cfr.c b/gcc/testsu= ite/c-c++-common/torture/harden-cfr.c > new file mode 100644 > index 0000000000000..73824c66f50a5 > --- /dev/null > +++ b/gcc/testsuite/c-c++-common/torture/harden-cfr.c > @@ -0,0 +1,84 @@ > +/* { dg-do run } */ > +/* { dg-options "-fharden-control-flow-redundancy -fdump-tree-hardcfr --= param hardcfr-max-blocks=3D9 --param hardcfr-max-inline-blocks=3D5 -ffat-lt= o-objects" } */ > + > +/* Check the instrumentation and the parameters. 
*/ > + > +int > +f (int i, int j) > +{ > + if (i < j) > + return 2 * i; > + else > + return 3 * j; > +} > + > +int > +g (unsigned i, int j) > +{ > + switch (i) > + { > + case 0: > + return j * 2; > + > + case 1: > + return j * 3; > + > + case 2: > + return j * 5; > + > + default: > + return j * 7; > + } > +} > + > +int > +h (unsigned i, int j) /* { dg-warning "has more than 9 blocks, the reque= sted maximum" } */ > +{ > + switch (i) > + { > + case 0: > + return j * 2; > + > + case 1: > + return j * 3; > + > + case 2: > + return j * 5; > + > + case 3: > + return j * 7; > + > + case 4: > + return j * 11; > + > + case 5: > + return j * 13; > + > + case 6: > + return j * 17; > + > + case 7: > + return j * 19; > + > + default: > + return j * 23; > + } > +} > + > +int > +main (int argc, char *argv[]) > +{ > + if (f (1, 2) !=3D 2 || g (2, 5) !=3D 25 || h (4, 3) !=3D 33 > + || argc < 0) > + __builtin_abort (); > + /* Call exit, instead of returning, to avoid an edge to the exit block= and > + thus implicitly disable hardening of main, when checking before nor= eturn > + calls is disabled. */ > + __builtin_exit (0); > +} > + > +/* Inlined checking thus trap for f. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 1 "hardcfr" } } */ > +/* Out-of-line checking for g (param), and before both noreturn calls in= main. */ > +/* { dg-final { scan-tree-dump-times "__hardcfr_check" 3 "hardcfr" } } *= / > +/* No checking for h (too many blocks). */ > diff --git a/gcc/testsuite/g++.dg/harden-cfr-throw-always-O0.C b/gcc/test= suite/g++.dg/harden-cfr-throw-always-O0.C > new file mode 100644 > index 0000000000000..e3c109b89c56a > --- /dev/null > +++ b/gcc/testsuite/g++.dg/harden-cfr-throw-always-O0.C > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dalways -fdump-tree-hardcfr -ffat-lto-objects -O0" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions, and also checking before noreturn > + calls. h2 and h2b get an extra resx without ehcleanup. */ > + > +#define NO_OPTIMIZE > + > +#include "torture/harden-cfr-throw.C" > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 16 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/harden-cfr-throw-returning-O0.C b/gcc/t= estsuite/g++.dg/harden-cfr-throw-returning-O0.C > new file mode 100644 > index 0000000000000..207bdb7471a4e > --- /dev/null > +++ b/gcc/testsuite/g++.dg/harden-cfr-throw-returning-O0.C > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -foptimize-sibling-cal= ls -fdump-tree-hardcfr -O0" } */ > + > +/* -fhardcfr-check-returning-calls gets implicitly disabled because, > + -at O0, -foptimize-sibling-calls has no effect. 
*/
> +
> +#define NO_OPTIMIZE
> +
> +#include "torture/harden-cfr-throw.C"
> +
> +/* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */
> +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */
> diff --git a/gcc/testsuite/g++.dg/harden-cfr-throw-returning-enabled-O0.C b/gcc/testsuite/g++.dg/harden-cfr-throw-returning-enabled-O0.C
> new file mode 100644
> index 0000000000000..b2df689c932d6
> --- /dev/null
> +++ b/gcc/testsuite/g++.dg/harden-cfr-throw-returning-enabled-O0.C
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-returning-calls -fdump-tree-hardcfr -O0" } */
> +
> +/* Explicitly enable -fhardcfr-check-returning-calls at -O0. */
> +
> +#include "torture/harden-cfr-throw.C"
> +
> +/* Same expectations as those in torture/harden-cfr-throw-returning.C. */
> +
> +/* { dg-final { scan-tree-dump-times "hardcfr_check" 10 "hardcfr" } } */
> +/* { dg-final { scan-tree-dump-times "builtin_trap" 2 "hardcfr" } } */
> diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C
> new file mode 100644
> index 0000000000000..0d35920c7eedf
> --- /dev/null
> +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-always-no-nothrow.C
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noreturn-calls=always -fdump-tree-hardcfr -ffat-lto-objects" } */
> +
> +/* Check that C++ does NOT make for implicit nothrow in noreturn
> + handling. */
> +
> +#include "harden-cfr-noret-no-nothrow.C"
> +
> +/* All 3 noreturn calls. */
> +/* { dg-final { scan-tree-dump-times "Bypassing cleanup" 3 "hardcfr" } } */
> +/* Out-of-line checks in f. */
> +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in block \[0-9]*'s edge to exit" 1 "hardcfr" } } */
> +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */
> +/* Inline checks in h and h2. */
> +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt" 2 "hardcfr" } } */
> +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */
> diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C
> new file mode 100644
> index 0000000000000..b7d247ff43c77
> --- /dev/null
> +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-never-no-nothrow.C
> @@ -0,0 +1,18 @@
> +/* { dg-do compile } */
> +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noreturn-calls=never -fdump-tree-hardcfr -ffat-lto-objects" } */
> +
> +/* Check that C++ does NOT make for implicit nothrow in noreturn
> + handling. Expected results for =never and =nothrow are the same,
> + since the functions are not nothrow. */
> +
> +#include "harden-cfr-noret-no-nothrow.C"
> +
> +/* All 3 noreturn calls. */
> +/* { dg-final { scan-tree-dump-times "Associated cleanup" 3 "hardcfr" } } */
> +/* Out-of-line checks in f. */
> +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in block \[0-9]*'s edge to exit" 1 "hardcfr" } } */
> +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before stmt" 1 "hardcfr" } } */
> +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */
> +/* Inline checks in h and h2.
*/ > +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt= " 2 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C b= /gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C > new file mode 100644 > index 0000000000000..62c58cfd406d4 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-noret-no-nothrow.C > @@ -0,0 +1,23 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that C++ does NOT make for implicit nothrow in noreturn > + handling. */ > + > +#define ATTR_NOTHROW_OPT > + > +#if ! __OPTIMIZE__ > +void __attribute__ ((__noreturn__)) h (void); > +#endif > + > +#include "../../c-c++-common/torture/harden-cfr-noret.c" > + > +/* All 3 noreturn calls. */ > +/* { dg-final { scan-tree-dump-times "Associated cleanup" 3 "hardcfr" } = } */ > +/* Out-of-line checks in f. */ > +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check in blo= ck \[0-9]*'s edge to exit" 1 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before= stmt" 1 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* Inline checks in h and h2. */ > +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt= " 2 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.C b/gcc= /testsuite/g++.dg/torture/harden-cfr-throw-always.C > new file mode 100644 > index 0000000000000..4d303e769ef72 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-always.C > @@ -0,0 +1,13 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fhardcfr-check-noreturn-calls=3Dalways -fdump-tree-hardcfr -f= fat-lto-objects" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions, and also checking before noreturn > + calls. */ > + > +#include "harden-cfr-throw.C" > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 14 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ > +/* h, h2, h2b, and h4. */ > +/* { dg-final { scan-tree-dump-times "Bypassing" 4 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-never.C b/gcc/= testsuite/g++.dg/torture/harden-cfr-throw-never.C > new file mode 100644 > index 0000000000000..81c1b1abae6e9 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-never.C > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fhardcfr-check-noreturn-calls=3Dnever -fdump-tree-hardcfr -ff= at-lto-objects" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions, without checking before noreturn > + calls. 
*/ > + > +#include "harden-cfr-throw.C" > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Bypassing" 0 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthrow-expe= cted.C b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthrow-expected.C > new file mode 100644 > index 0000000000000..de37b2ab1c5ca > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthrow-expected.C > @@ -0,0 +1,16 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fhardcfr-check-noreturn-calls=3Dno-xthrow -fdump-tree-hardcfr= -ffat-lto-objects" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions, and also checking before noreturn > + calls. */ > + > +extern void __attribute__ ((__noreturn__, __expected_throw__)) g (void); > +extern void __attribute__ ((__noreturn__, __expected_throw__)) g2 (void)= ; > + > +#include "harden-cfr-throw.C" > + > +/* In f and h3, there are checkpoints at return and exception escape. .= */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 4 "hardcfr" } } */ > +/* Other functions get a single cleanup checkpoint. */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 5 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthrow.C b/= gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthrow.C > new file mode 100644 > index 0000000000000..720498b4bbcb0 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-no-xthrow.C > @@ -0,0 +1,12 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fhardcfr-check-noreturn-calls=3Dno-xthrow -fdump-tree-hardcfr= -ffat-lto-objects" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions, and also checking before noreturn > + calls. */ > + > +#include "harden-cfr-throw.C" > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Bypassing" 0 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C b/= gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C > new file mode 100644 > index 0000000000000..9f359363d177c > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nocleanup.C > @@ -0,0 +1,11 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -fno-hardcfr-check-exceptions -fno-hardcfr-check-returnin= g-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that we do not insert cleanups for checking around the bodies > + of maybe-throwing functions. h4 doesn't get any checks, because we > + don't have noreturn checking enabled. 
*/ > + > +#include "harden-cfr-throw.C" > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 0 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 6 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nothrow.C b/gc= c/testsuite/g++.dg/torture/harden-cfr-throw-nothrow.C > new file mode 100644 > index 0000000000000..e1c2e8d73bb75 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-nothrow.C > @@ -0,0 +1,11 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fhardcfr-check-noreturn-calls=3Dnothrow -fdump-tree-hardcfr -= ffat-lto-objects" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions, without checking before noreturn > + calls. */ > + > +#include "harden-cfr-throw.C" > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw-returning.C b/= gcc/testsuite/g++.dg/torture/harden-cfr-throw-returning.C > new file mode 100644 > index 0000000000000..37e4551d09666 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw-returning.C > @@ -0,0 +1,31 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnever -foptimize-sibling-calls -fdump-tree-hardcfr -ffat-lto-obj= ects" } */ > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions. These results depend on checking before > + returning calls, which is only enabled when sibcall optimizations > + are enabled, so change the optimization mode to -O1 for f and f2, > + so that -foptimize-sibling-calls can take effect and enable > + -fhardcfr-check-returning-calls, so that we get the same results. > + There is a separate test for -O0. */ > + > +#if ! __OPTIMIZE__ > +void __attribute__ ((__optimize__ (1, "-foptimize-sibling-calls"))) f(in= t i); > +void __attribute__ ((__optimize__ (1, "-foptimize-sibling-calls"))) f2(i= nt i); > +void __attribute__ ((__optimize__ (1, "-foptimize-sibling-calls"))) h3(v= oid); > +#endif > + > +#include "harden-cfr-throw.C" > + > +/* f gets out-of-line checks before the unwrapped tail call and in the > + else edge. */ > +/* f2 gets out-of-line checks before both unwrapped tail calls. */ > +/* h gets out-of-line checks before the implicit return and in the > + cleanup block. */ > +/* h2 and h2b get out-of-line checks before the cleanup returning > + call, and in the cleanup block. */ > +/* h3 gets an inline check before the __cxa_end_catch returning call. *= / > +/* h4 gets an inline check in the cleanup block. */ > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 10 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/g++.dg/torture/harden-cfr-throw.C b/gcc/testsu= ite/g++.dg/torture/harden-cfr-throw.C > new file mode 100644 > index 0000000000000..8e46b900cd263 > --- /dev/null > +++ b/gcc/testsuite/g++.dg/torture/harden-cfr-throw.C > @@ -0,0 +1,73 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ret= urning-calls -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +#if ! __OPTIMIZE__ && ! 
defined NO_OPTIMIZE > +/* Without optimization, functions with cleanups end up with an extra > + resx that is not optimized out, so arrange to optimize them. */ > +void __attribute__ ((__optimize__ (1))) h2(void); > +void __attribute__ ((__optimize__ (1))) h2b(void); > +#endif > + > +/* Check that we insert cleanups for checking around the bodies of > + maybe-throwing functions. */ > + > +extern void g (void); > +extern void g2 (void); > + > +void f(int i) { > + if (i) > + g (); > + /* Out-of-line checks here, and in the implicit handler. */ > +} > + > +void f2(int i) { > + if (i) > + g (); > + else > + g2 (); > + /* Out-of-line checks here, and in the implicit handler. */ > +} > + > +void h(void) { > + try { > + g (); > + } catch (...) { > + throw; > + } > + /* Out-of-line checks here, and in the implicit handler. */ > +} > + > +struct needs_cleanup { > + ~needs_cleanup(); > +}; > + > +void h2(void) { > + needs_cleanup y; /* No check in the cleanup handler. */ > + g(); > + /* Out-of-line checks here, and in the implicit handler. */ > +} > + > +extern void __attribute__ ((__nothrow__)) another_cleanup (void*); > + > +void h2b(void) { > + int x __attribute__ ((cleanup (another_cleanup))); > + g(); > + /* Out-of-line checks here, and in the implicit handler. */ > +} > + > +void h3(void) { > + try { > + throw 1; > + } catch (...) { > + } > + /* Out-of-line checks here, and in the implicit handler. */ > +} > + > +void h4(void) { > + throw 1; > + /* Inline check in the cleanup around the __cxa_throw noreturn call. = */ > +} > + > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 12 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "builtin_trap" 1 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Bypassing" 0 "hardcfr" } } */ > diff --git a/gcc/testsuite/gcc.dg/torture/harden-cfr-noret-no-nothrow.c b= /gcc/testsuite/gcc.dg/torture/harden-cfr-noret-no-nothrow.c > new file mode 100644 > index 0000000000000..8e4ee1fab08cb > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/torture/harden-cfr-noret-no-nothrow.c > @@ -0,0 +1,15 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-noretu= rn-calls=3Dnothrow -fdump-tree-hardcfr -ffat-lto-objects" } */ > + > +/* Check that C makes for implicit nothrow in noreturn handling. */ > + > +#define ATTR_NOTHROW_OPT > + > +#include "../../c-c++-common/torture/harden-cfr-noret.c" > + > +/* One out-of-line check before the noreturn call in f, and another at t= he end > + of f. */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* One inline check in h, before the noreturn call, and another in h2, b= efore > + or after the call, depending on noreturn detection. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > diff --git a/gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c b/gcc/test= suite/gcc.dg/torture/harden-cfr-tail-ub.c > new file mode 100644 > index 0000000000000..634d98f1ffca4 > --- /dev/null > +++ b/gcc/testsuite/gcc.dg/torture/harden-cfr-tail-ub.c > @@ -0,0 +1,40 @@ > +/* { dg-do compile } */ > +/* { dg-options "-fharden-control-flow-redundancy -fhardcfr-check-return= ing-calls -fno-hardcfr-check-exceptions -fdump-tree-hardcfr -ffat-lto-objec= ts -Wno-return-type" } */ > + > +/* In C only, check some additional cases (comparing with > + c-c++-common/torture/harden-cfr-tail.c) of falling off the end of non= -void > + function. C++ would issue an unreachable call in these cases. 
*/ > + > +extern int g (int i); > + > +int f1(int i) { > + /* Inline check before the returning call, that doesn't return anythin= g. */ > + g (i); > + /* Implicit return without value, despite the return type; this combin= ation > + enables tail-calling of g, and is recognized as a returning call. = */ > +} > + > +extern void g2 (int i); > + > +int f2(int i) { > + /* Inline check before the returning call, that disregards its return > + value. */ > + g2 (i); > + /* Implicit return without value, despite the return type; this combin= ation > + enables tail-calling of g2, and is recognized as a returning call. = */ > +} > + > +int f3(int i) { > + if (i) > + /* Out-of-line check before the returning call. */ > + return g (i); > + /* Out-of-line check before implicit return. */ > +} > + > +/* Out-of-line checks in f3, before returning calls and before return. = */ > +/* { dg-final { scan-tree-dump-times "hardcfr_check" 2 "hardcfr" } } */ > +/* Inline checking in all other functions. */ > +/* { dg-final { scan-tree-dump-times "__builtin_trap" 2 "hardcfr" } } */ > +/* Check before tail-call in all functions, but f3 is out-of-line. */ > +/* { dg-final { scan-tree-dump-times "Inserting inline check before stmt= " 2 "hardcfr" } } */ > +/* { dg-final { scan-tree-dump-times "Inserting out-of-line check before= stmt" 1 "hardcfr" } } */ > diff --git a/gcc/testsuite/gnat.dg/hardcfr.adb b/gcc/testsuite/gnat.dg/ha= rdcfr.adb > new file mode 100644 > index 0000000000000..abe1605c029fa > --- /dev/null > +++ b/gcc/testsuite/gnat.dg/hardcfr.adb > @@ -0,0 +1,76 @@ > +-- { dg-do run } > +-- { dg-options "-fharden-control-flow-redundancy -fno-hardcfr-check-ex= ceptions -fdump-tree-hardcfr --param=3Dhardcfr-max-blocks=3D22 --param=3Dha= rdcfr-max-inline-blocks=3D12 -O0" } > + > +procedure HardCFR is > + function F (I, J : Integer) return Integer is > + begin > + if (I < J) then > + return 2 * I; > + else > + return 3 * J; > + end if; > + end F; > + > + function G (I : Natural; J : Integer) return Integer is > + begin > + case I is > + when 0 =3D> > + return J * 2; > + > + when 1 =3D> > + return J * 3; > + > + when 2 =3D> > + return J * 5; > + > + when others =3D> > + return J * 7; > + end case; > + end G; > + > + function H (I : Natural; -- { dg-warning "has more than 22 blocks, th= e requested maximum" } > + J : Integer) > + return Integer is > + begin > + case I is > + when 0 =3D> > + return J * 2; > + > + when 1 =3D> > + return J * 3; > + > + when 2 =3D> > + return J * 5; > + > + when 3 =3D> > + return J * 7; > + > + when 4 =3D> > + return J * 11; > + > + when 5 =3D> > + return J * 13; > + > + when 6 =3D> > + return J * 17; > + > + when 7 =3D> > + return J * 19; > + > + when others =3D> > + return J * 23; > + end case; > + end H; > +begin > + if (F (1, 2) /=3D 2 or else F (3, 2) /=3D 6 > + or else G (2, 5) /=3D 25 or else H (4, 3) /=3D 33) > + then > + raise Program_Error; > + end if; > +end HardCFR; > + > +-- HardCFR and HardCFR.F: > +-- { dg-final { scan-tree-dump-times ".builtin_trap" 2 "hardcfr" } } > + > +-- This is __builtin___hardcfr_check in HardCFR.G: > +-- { dg-final { scan-tree-dump-times ".builtin " 1 "hardcfr" } } > diff --git a/gcc/tree-core.h b/gcc/tree-core.h > index 77417dbd658b4..2c89b655691b1 100644 > --- a/gcc/tree-core.h > +++ b/gcc/tree-core.h > @@ -95,6 +95,9 @@ struct die_struct; > /* Nonzero if this is a cold function. */ > #define ECF_COLD (1 << 15) > > +/* Nonzero if this is a function expected to end with an exception. 
*/
> +#define ECF_XTHROW (1 << 16)
> +
> /* Call argument flags. */
>
> /* Nonzero if the argument is not used by the function. */
> diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
> index 79a5f330274d8..09e6ada5b2f91 100644
> --- a/gcc/tree-pass.h
> +++ b/gcc/tree-pass.h
> @@ -657,6 +657,8 @@ extern gimple_opt_pass *make_pass_gimple_isel (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_harden_compares (gcc::context *ctxt);
> extern gimple_opt_pass *make_pass_harden_conditional_branches (gcc::context
> *ctxt);
> +extern gimple_opt_pass *make_pass_harden_control_flow_redundancy (gcc::context
> + *ctxt);
>
> /* Current optimization pass. */
> extern opt_pass *current_pass;
> diff --git a/gcc/tree.cc b/gcc/tree.cc
> index 69369c6c3eeeb..f7bfd9e3451b3 100644
> --- a/gcc/tree.cc
> +++ b/gcc/tree.cc
> @@ -9748,6 +9748,10 @@ set_call_expr_flags (tree decl, int flags)
> DECL_ATTRIBUTES (decl));
> if ((flags & ECF_TM_PURE) && flag_tm)
> apply_tm_attr (decl, get_identifier ("transaction_pure"));
> + if ((flags & ECF_XTHROW))
> + DECL_ATTRIBUTES (decl)
> + = tree_cons (get_identifier ("expected_throw"),
> + NULL, DECL_ATTRIBUTES (decl));
> /* Looping const or pure is implied by noreturn.
> There is currently no way to declare looping const or looping pure alone. */
> gcc_assert (!(flags & ECF_LOOPING_CONST_OR_PURE)
> @@ -9960,7 +9964,8 @@ build_common_builtin_nodes (void)
> ftype = build_function_type_list (void_type_node, NULL_TREE);
> local_define_builtin ("__builtin_cxa_end_cleanup", ftype,
> BUILT_IN_CXA_END_CLEANUP,
> - "__cxa_end_cleanup", ECF_NORETURN | ECF_LEAF);
> + "__cxa_end_cleanup",
> + ECF_NORETURN | ECF_XTHROW | ECF_LEAF);
> }
>
> ftype = build_function_type_list (void_type_node, ptr_type_node, NULL_TREE);
> @@ -9969,7 +9974,7 @@ build_common_builtin_nodes (void)
> ((targetm_common.except_unwind_info (&global_options)
> == UI_SJLJ)
> ? "_Unwind_SjLj_Resume" : "_Unwind_Resume"),
> - ECF_NORETURN);
> + ECF_NORETURN | ECF_XTHROW);
>
> if (builtin_decl_explicit (BUILT_IN_RETURN_ADDRESS) == NULL_TREE)
> {
> diff --git a/libgcc/Makefile.in b/libgcc/Makefile.in
> index 7ee8b5f9bcba4..8dedd10f79a30 100644
> --- a/libgcc/Makefile.in
> +++ b/libgcc/Makefile.in
> @@ -430,6 +430,9 @@ endif
>
> LIB2ADD += enable-execute-stack.c
>
> +# Control Flow Redundancy hardening out-of-line checker.
> +LIB2ADD += $(srcdir)/hardcfr.c
> +
> # While emutls.c has nothing to do with EH, it is in LIB2ADDEH*
> # instead of LIB2ADD because that's the way to be sure on some targets
> # (e.g. *-*-darwin*) only one copy of it is linked.
> diff --git a/libgcc/hardcfr.c b/libgcc/hardcfr.c
> new file mode 100644
> index 0000000000000..7496095b8666c
> --- /dev/null
> +++ b/libgcc/hardcfr.c
> @@ -0,0 +1,300 @@
> +/* Control flow redundancy hardening
> + Copyright (C) 2022 Free Software Foundation, Inc.
> + Contributed by Alexandre Oliva
> +
> +This file is part of GCC.
> +
> +GCC is free software; you can redistribute it and/or modify it under
> +the terms of the GNU General Public License as published by the Free
> +Software Foundation; either version 3, or (at your option) any later
> +version.
> +
> +GCC is distributed in the hope that it will be useful, but WITHOUT ANY
> +WARRANTY; without even the implied warranty of MERCHANTABILITY or
> +FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
> +for more details.
> +
> +Under Section 7 of GPL version 3, you are granted additional
> +permissions described in the GCC Runtime Library Exception, version
> +3.1, as published by the Free Software Foundation.
> +
> +You should have received a copy of the GNU General Public License and
> +a copy of the GCC Runtime Library Exception along with this program;
> +see the files COPYING3 and COPYING.RUNTIME respectively. If not, see
> +<http://www.gnu.org/licenses/>. */
> +
> +/* Avoid infinite recursion. */
> +#pragma GCC optimize ("-fno-harden-control-flow-redundancy")
> +
> +#include <stddef.h>
> +#include <stdbool.h>
> +
> +/* This should be kept in sync with gcc/gimple-harden-control-flow.cc. */
> +#if __CHAR_BIT__ >= 28
> +# define VWORDmode __QI__
> +#elif __CHAR_BIT__ >= 14
> +# define VWORDmode __HI__
> +#else
> +# define VWORDmode __SI__
> +#endif
> +
> +typedef unsigned int __attribute__ ((__mode__ (VWORDmode))) vword;
> +
> +/* This function is optionally called at the end of a function to verify that
> + the VISITED array represents a sensible execution path in the CFG. It is
> + always expected to pass; the purpose is to detect attempts to subvert
> + execution by taking unexpected paths, or other execution errors. The
> + function, instrumented by pass_harden_control_flow_redundancy at a time in
> + which it had BLOCKS basic blocks (not counting ENTER and EXIT, so block 2
> + maps to index 0, the first bit of the first VWORD), sets a bit in the bit
> + array VISITED as it enters the corresponding basic block. CFG holds a
> + representation of the control flow graph at the time of the instrumentation:
> + an array of VWORDs holding, for each block, a sequence of predecessors, and
> + a sequence of successors. Each pred and succ sequence is represented as a
> + sequence of pairs (mask, index), terminated by an index-less all-zero mask.
> + If the bit corresponding to the block is set, then at least one of the pred
> + masks, and at least one of the succ masks, must have a bit set in
> + VISITED[index]. An ENTRY block predecessor and an EXIT block successor are
> + represented in a (mask, index) pair that tests the block's own bit. */
> +extern void __hardcfr_check (size_t blocks,
> + vword const *visited,
> + vword const *cfg);
> +
> +/* Compute the MASK for the bit representing BLOCK in WORDIDX's vword in a
> + visited blocks bit array. */
> +static inline void
> +block2mask (size_t const block, vword *const mask, size_t *const wordidx)
> +{
> + size_t wbits = __CHAR_BIT__ * sizeof (vword);
> + *wordidx = block / wbits;
> + *mask = (vword)1 << (block % wbits);
> +}
> +
> +/* Check whether the bit corresponding to BLOCK is set in VISITED. */
> +static inline bool
> +visited_p (size_t const block, vword const *const visited)
> +{
> + vword mask;
> + size_t wordidx;
> + block2mask (block, &mask, &wordidx);
> + vword w = visited[wordidx];
> + return (w & mask) != 0;
> +}
> +
> +/* Check whether any VISITED bits that would correspond to blocks after BLOCKS
> + are set. */
> +static inline bool
> +excess_bits_set_p (size_t const blocks, vword const *const visited)
> +{
> + vword mask;
> + size_t wordidx;
> + block2mask (blocks - 1, &mask, &wordidx);
> + mask = -mask - mask;
> + vword w = visited[wordidx];
> + return (w & mask) != 0;
> +}
> +
> +/* Read and consume a mask from **CFG_IT. (Consume meaning advancing the
> + iterator to the next word). If the mask is zero, return FALSE.
Othe= rwise, > + also read and consume an index, and set *MASK and/or *WORDIDX, whiche= ver are > + nonNULL, to the corresponding read values, and finally return TRUE. = */ > +static inline bool > +next_pair (vword const **const cfg_it, > + vword *const mask, > + size_t *const wordidx) > +{ > + vword m =3D **cfg_it; > + ++*cfg_it; > + if (!m) > + return false; > + > + if (mask) > + *mask =3D m; > + > + size_t word =3D **cfg_it; > + ++*cfg_it; > + > + if (wordidx) > + *wordidx =3D word; > + > + return true; > +} > + > +/* Return TRUE iff any of the bits in MASK is set in VISITED[WORDIDX]. = */ > +static inline bool > +test_mask (vword const *const visited, > + vword const mask, size_t const wordidx) > +{ > + return (visited[wordidx] & mask) !=3D 0; > +} > + > +/* Scan a sequence of pairs (mask, index) at **CFG_IT until its terminat= or is > + reached and consumed. */ > +static inline void > +consume_seq (vword const **const cfg_it) > +{ > + while (next_pair (cfg_it, NULL, NULL)) > + /* Do nothing. */; > +} > + > +/* Check that at least one of the MASK bits in a sequence of pairs (mask= , > + index) at **CFG_IT is set in the corresponding VISITED[INDEX] word. = Trap if > + we reach the terminator without finding any. Consume the entire sequ= ence > + otherwise, so that *CFG_IT points just past the terminator, which may= be the > + beginning of the next sequence. */ > +static inline bool > +check_seq (vword const *const visited, vword const **const cfg_it) > +{ > + vword mask; > + size_t wordidx; > + > + /* If the block was visited, check that at least one of the > + preds/succs was also visited. */ > + do > + /* If we get to the end of the sequence without finding any > + match, something is amiss. */ > + if (!next_pair (cfg_it, &mask, &wordidx)) > + return false; > + /* Keep searching until we find a match, at which point the > + condition is satisfied. */ > + while (!test_mask (visited, mask, wordidx)); > + > + /* Consume the remaining entries in the sequence, whether we found a m= atch or > + skipped the block, so as to position the iterator at the beginning = of the > + next . */ > + consume_seq (cfg_it); > + > + return true; > +} > + > +/* Print out the CFG with BLOCKS blocks, presumed to be associated with = CALLER. > + This is expected to be optimized out entirely, unless the verbose par= t of > + __hardcfr_check_fail is enabled. */ > +static inline void > +__hardcfr_debug_cfg (size_t const blocks, > + void const *const caller, > + vword const *const cfg) > +{ > + __builtin_printf ("CFG at %p, for %p", cfg, caller); > + vword const *cfg_it =3D cfg; > + for (size_t i =3D 0; i < blocks; i++) > + { > + vword mask; size_t wordidx; > + block2mask (i, &mask, &wordidx); > + __builtin_printf ("\nblock %lu (%lu/0x%lx)\npreds: ", > + (unsigned long)i, > + (unsigned long)wordidx, (unsigned long)mask); > + while (next_pair (&cfg_it, &mask, &wordidx)) > + __builtin_printf (" (%lu/0x%lx)", > + (unsigned long)wordidx, (unsigned long)mask); > + __builtin_printf ("\nsuccs: "); > + while (next_pair (&cfg_it, &mask, &wordidx)) > + __builtin_printf (" (%lu/0x%lx)", > + (unsigned long)wordidx, (unsigned long)mask); > + } > + __builtin_printf ("\n"); > +} > + > +#ifndef ATTRIBUTE_UNUSED > +# define ATTRIBUTE_UNUSED __attribute__ ((__unused__)) > +#endif > + > +/* This is called when an out-of-line hardcfr check fails. All the argu= ments > + are ignored, and it just traps, unless HARDCFR_VERBOSE_FAIL is enable= d. 
IF > + it is, it prints the PART of the CFG, expected to have BLOCKS blocks,= that > + failed at CALLER's BLOCK, and the VISITED bitmap. When the verbose m= ode is > + enabled, it also forces __hardcfr_debug_cfg (above) to be compiled in= to an > + out-of-line function, that could be called from a debugger. > + */ > +static inline void > +__hardcfr_check_fail (size_t const blocks ATTRIBUTE_UNUSED, > + vword const *const visited ATTRIBUTE_UNUSED, > + vword const *const cfg ATTRIBUTE_UNUSED, > + size_t const block ATTRIBUTE_UNUSED, > + int const part ATTRIBUTE_UNUSED, > + void const *const caller ATTRIBUTE_UNUSED) > +{ > +#if HARDCFR_VERBOSE_FAIL > + static const char *parts[] =3D { "preds", "succs", "no excess" }; > + > + vword mask; size_t wordidx; > + block2mask (block, &mask, &wordidx); > + if (part =3D=3D 2) > + mask =3D -mask - mask; > + __builtin_printf ("hardcfr fail at %p block %lu (%lu/0x%lx), expected = %s:", > + caller, (unsigned long)block, > + (unsigned long)wordidx, (unsigned long)mask, > + parts[part]); > + > + if (part !=3D 2) > + { > + /* Skip data for previous blocks. */ > + vword const *cfg_it =3D cfg; > + for (size_t i =3D block; i--; ) > + { > + consume_seq (&cfg_it); > + consume_seq (&cfg_it); > + } > + for (size_t i =3D part; i--; ) > + consume_seq (&cfg_it); > + > + while (next_pair (&cfg_it, &mask, &wordidx)) > + __builtin_printf (" (%lu/0x%lx)", > + (unsigned long)wordidx, (unsigned long)mask); > + } > + > + __builtin_printf ("\nvisited:"); > + block2mask (blocks - 1, &mask, &wordidx); > + for (size_t i =3D 0; i <=3D wordidx; i++) > + __builtin_printf (" (%lu/0x%lx)", > + (unsigned long)i, (unsigned long)visited[i]); > + __builtin_printf ("\n"); > + > + /* Reference __hardcfr_debug_cfg so that it's output out-of-line, so t= hat it > + can be called from a debugger. */ > + if (!caller || caller =3D=3D __hardcfr_debug_cfg) > + return; > +#endif > + __builtin_trap (); > +} > + > +/* Check that, for each of the BLOCKS basic blocks, if its bit is set in > + VISITED, at least one of its predecessors in CFG is also set, and at = also > + that at least one of its successors in CFG is also set. */ > +void > +__hardcfr_check (size_t const blocks, > + vword const *const visited, > + vword const *const cfg) > +{ > + vword const *cfg_it =3D cfg; > + for (size_t i =3D 0; i < blocks; i++) > + { > + bool v =3D visited_p (i, visited); > + > + /* For each block, there are two sequences of pairs (mask, index),= each > + sequence terminated by a single all-zero mask (no index). The f= irst > + sequence is for predecessor blocks, the second is for successors= . At > + least one of each must be set. */ > + if (!v) > + { > + /* Consume predecessors. */ > + consume_seq (&cfg_it); > + /* Consume successors. */ > + consume_seq (&cfg_it); > + } > + else > + { > + /* Check predecessors. */ > + if (!check_seq (visited, &cfg_it)) > + __hardcfr_check_fail (blocks, visited, cfg, i, 0, > + __builtin_return_address (0)); > + /* Check successors. */ > + if (!check_seq (visited, &cfg_it)) > + __hardcfr_check_fail (blocks, visited, cfg, i, 1, > + __builtin_return_address (0)); > + } > + } > + if (excess_bits_set_p (blocks, visited)) > + __hardcfr_check_fail (blocks, visited, cfg, blocks - 1, 2, > + __builtin_return_address (0)); > +} > > > -- > Alexandre Oliva, happy hacker https://FSFLA.org/blogs/lxo/ > Free Software Activist GNU Toolchain Engineer > More tolerance and less prejudice are key for inclusion and diversity > Excluding neuro-others for not behaving ""normal"" is *not* inclusive
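
For readers who want to see the checker's data layout in isolation, here is a minimal, self-contained sketch of the encoding the quoted hardcfr.c comments describe: one word of visited bits plus, for each block, a predecessor sequence and a successor sequence of (mask, index) pairs, each terminated by a zero mask, with ENTRY and EXIT edges encoded as pairs that test the block's own bit. The three-block diamond, the block numbering, and the check/check_seq helpers below are assumptions made for this illustration only; they mirror the documented walk but return a bool instead of trapping, and they are neither the compiler-generated tables nor the libgcc entry point.

/* Illustrative sketch only: a hand-built CFG table in the (mask, index)
   encoding described in the quoted hardcfr.c comments, and a tiny checker
   that mirrors the documented walk.  Block numbers and helper names are
   assumptions for this example; the real tables are emitted by the pass
   and verified by __hardcfr_check.  */

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int vword;   /* Stand-in for the mode-sized vword.  */

/* Three blocks of an if/else diamond: 0 tests, 1 is "then", 2 is "else".
   ENTRY and EXIT edges are encoded as pairs that test the block's own bit.  */
enum { B_TEST, B_THEN, B_ELSE, NBLOCKS };
#define BIT(b) ((vword) 1 << (b))

/* Per block: pred pairs, zero terminator, succ pairs, zero terminator.
   All indices are 0 because a single vword covers all three bits.  */
static const vword cfg[] = {
  BIT (B_TEST), 0, 0,                   /* B_TEST preds: ENTRY (own bit).  */
  BIT (B_THEN), 0, BIT (B_ELSE), 0, 0,  /* B_TEST succs: then or else.  */
  BIT (B_TEST), 0, 0,                   /* B_THEN preds: test.  */
  BIT (B_THEN), 0, 0,                   /* B_THEN succs: EXIT (own bit).  */
  BIT (B_TEST), 0, 0,                   /* B_ELSE preds: test.  */
  BIT (B_ELSE), 0, 0,                   /* B_ELSE succs: EXIT (own bit).  */
};

/* Scan one (mask, index) sequence.  If ACTIVE, require some pair to have a
   bit set in VISITED, else just consume it.  Return the next sequence.  */
static const vword *
check_seq (const vword *p, const vword *visited, bool active, bool *ok)
{
  bool found = !active;
  for (; *p; p += 2)
    if (active && (visited[p[1]] & p[0]))
      found = true;
  if (!found)
    *ok = false;
  return p + 1;  /* Skip the zero terminator.  */
}

/* Return true iff VISITED describes a path consistent with CFG.  */
static bool
check (const vword *visited)
{
  const vword *p = cfg;
  bool ok = true;
  for (int b = 0; b < NBLOCKS; b++)
    {
      bool active = (visited[0] & BIT (b)) != 0;
      p = check_seq (p, visited, active, &ok);  /* Predecessors.  */
      p = check_seq (p, visited, active, &ok);  /* Successors.  */
    }
  return ok;
}

int
main (void)
{
  /* What instrumented code would record while taking the "then" path.  */
  vword visited[1] = { BIT (B_TEST) | BIT (B_THEN) };
  printf ("then path: %s\n", check (visited) ? "ok" : "FAIL");

  /* A corrupted trace: "else" marked without its predecessor "test".  */
  vword bogus[1] = { BIT (B_ELSE) };
  printf ("bogus trace: %s\n", check (bogus) ? "ok" : "FAIL");
  return 0;
}

Running this prints "ok" for the legitimate path and "FAIL" for the corrupted trace; the latter corresponds to the condition under which the real checker reports the failing block (when verbose failures are enabled) and calls __builtin_trap.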