From: Patrick Palka
Date: Fri, 3 Feb 2023 15:57:02 -0500 (EST)
To: Patrick Palka
Cc: Jason Merrill, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH 2/2] c++: speculative constexpr and is_constant_evaluated [PR108243]
In-Reply-To: <8ca008ee-d56c-52d9-4f3f-3f738e0e8af6@idea>
Message-ID: <608392d4-11a1-5e07-0975-d5a1391d85a2@idea>
References: <20230127220250.1896137-1-ppalka@redhat.com> <20230127220250.1896137-2-ppalka@redhat.com> <154813b4-b680-aad5-ce7a-8ea012626eda@redhat.com> <8ca008ee-d56c-52d9-4f3f-3f738e0e8af6@idea>

On Fri, 3 Feb 2023, Patrick Palka wrote:

> On Mon, 30 Jan 2023, Jason Merrill wrote:
>
> > On 1/27/23 17:02, Patrick Palka wrote:
> > > This PR illustrates that __builtin_is_constant_evaluated currently acts
> > > as an optimization barrier for our speculative constexpr evaluation,
> > > since we don't want to prematurely fold the builtin to false if the
> > > expression in question would later be manifestly constant evaluated (in
> > > which case it must be folded to true).
> > >
> > > This patch fixes this by permitting __builtin_is_constant_evaluated
> > > to get folded as false during cp_fold_function, since at that point
> > > we're sure we're done with manifestly constant evaluation.  To that
> > > end we add a flags parameter to cp_fold that controls what mce_value
> > > the CALL_EXPR case passes to maybe_constant_value.
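
[A reduced illustration of the barrier (not part of the patch; the
builtin's result depends on the evaluation context):

  constexpr int foo () {
    // true during manifestly constant evaluation, false at runtime
    return __builtin_is_constant_evaluated () ? 5 : 4;
  }

  constexpr int c = foo ();   // manifestly constant evaluated: c == 5
  int r;
  void g () { r = foo (); }   // plain runtime evaluation: may fold to 4

Before the patch, cp_fold couldn't speculatively fold the call in g()
at all, since it didn't yet know whether the builtin would later have
to be true.]
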
> > >
> > > Bootstrapped and regtested on x86_64-pc-linux-gnu, does this look OK for
> > > trunk?
> > >
> > > 	PR c++/108243
> > >
> > > gcc/cp/ChangeLog:
> > >
> > > 	* cp-gimplify.cc (enum fold_flags): Define.
> > > 	(cp_fold_data::genericize): Replace this data member with ...
> > > 	(cp_fold_data::fold_flags): ... this.
> > > 	(cp_fold_r): Adjust cp_fold_data use and cp_fold calls.
> > > 	(cp_fold_function): Likewise.
> > > 	(cp_fold_maybe_rvalue): Likewise.
> > > 	(cp_fully_fold_init): Likewise.
> > > 	(cp_fold): Add fold_flags parameter.  Don't cache if flags
> > > 	isn't empty.
> > > 	<case CALL_EXPR>: Pass mce_false to maybe_constant_value
> > > 	if ff_genericize is set.
> > >
> > > gcc/testsuite/ChangeLog:
> > >
> > > 	* g++.dg/opt/pr108243.C: New test.
> > > ---
> > >  gcc/cp/cp-gimplify.cc               | 76 ++++++++++++++++++-----------
> > >  gcc/testsuite/g++.dg/opt/pr108243.C | 29 +++++++++++
> > >  2 files changed, 76 insertions(+), 29 deletions(-)
> > >  create mode 100644 gcc/testsuite/g++.dg/opt/pr108243.C
> > >
> > > diff --git a/gcc/cp/cp-gimplify.cc b/gcc/cp/cp-gimplify.cc
> > > index a35cedd05cc..d023a63768f 100644
> > > --- a/gcc/cp/cp-gimplify.cc
> > > +++ b/gcc/cp/cp-gimplify.cc
> > > @@ -43,12 +43,20 @@ along with GCC; see the file COPYING3.  If not see
> > >  #include "omp-general.h"
> > >  #include "opts.h"
> > >
> > > +/* Flags for cp_fold and cp_fold_r.  */
> > > +
> > > +enum fold_flags {
> > > +  ff_none = 0,
> > > +  /* Whether we're being called from cp_fold_function.  */
> > > +  ff_genericize = 1 << 0,
> > > +};
> > > +
> > >  /* Forward declarations.  */
> > >
> > >  static tree cp_genericize_r (tree *, int *, void *);
> > >  static tree cp_fold_r (tree *, int *, void *);
> > >  static void cp_genericize_tree (tree*, bool);
> > > -static tree cp_fold (tree);
> > > +static tree cp_fold (tree, fold_flags);
> > >
> > >  /* Genericize a TRY_BLOCK.  */
> > >
> > > @@ -996,9 +1004,8 @@ struct cp_genericize_data
> > >  struct cp_fold_data
> > >  {
> > >    hash_set<tree> pset;
> > > -  bool genericize; // called from cp_fold_function?
> > > -
> > > -  cp_fold_data (bool g): genericize (g) {}
> > > +  fold_flags flags;
> > > +  cp_fold_data (fold_flags flags): flags (flags) {}
> > >  };
> > >
> > >  static tree
> > > @@ -1039,7 +1046,7 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
> > >        break;
> > >      }
> > >
> > > -  *stmt_p = stmt = cp_fold (*stmt_p);
> > > +  *stmt_p = stmt = cp_fold (*stmt_p, data->flags);
> > >
> > >    if (data->pset.add (stmt))
> > >      {
> > > @@ -1119,12 +1126,12 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
> > >  	 here rather than in cp_genericize to avoid problems with the invisible
> > >  	 reference transition.  */
> > >      case INIT_EXPR:
> > > -      if (data->genericize)
> > > +      if (data->flags & ff_genericize)
> > >  	cp_genericize_init_expr (stmt_p);
> > >        break;
> > >
> > >      case TARGET_EXPR:
> > > -      if (data->genericize)
> > > +      if (data->flags & ff_genericize)
> > >  	cp_genericize_target_expr (stmt_p);
> > >
> > >        /* Folding might replace e.g. a COND_EXPR with a TARGET_EXPR; in
> > > @@ -1157,7 +1164,7 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
> > >  void
> > >  cp_fold_function (tree fndecl)
> > >  {
> > > -  cp_fold_data data (/*genericize*/true);
> > > +  cp_fold_data data (ff_genericize);
> > >    cp_walk_tree (&DECL_SAVED_TREE (fndecl), cp_fold_r, &data, NULL);
> > >  }
> > >
> > > @@ -2375,7 +2382,7 @@ cp_fold_maybe_rvalue (tree x, bool rval)
> > >  {
> > >    while (true)
> > >      {
> > > -      x = cp_fold (x);
> > > +      x = cp_fold (x, ff_none);
> > >        if (rval)
> > >  	x = mark_rvalue_use (x);
> > >        if (rval && DECL_P (x)
> > > @@ -2434,7 +2441,7 @@ cp_fully_fold_init (tree x)
> > >    if (processing_template_decl)
> > >      return x;
> > >    x = cp_fully_fold (x);
> > > -  cp_fold_data data (/*genericize*/false);
> > > +  cp_fold_data data (ff_none);
> > >    cp_walk_tree (&x, cp_fold_r, &data, NULL);
> > >    return x;
> > >  }
> > > @@ -2469,7 +2476,7 @@ clear_fold_cache (void)
> > >     Function returns X or its folded variant.  */
> > >
> > >  static tree
> > > -cp_fold (tree x)
> > > +cp_fold (tree x, fold_flags flags)
> > >  {
> > >    tree op0, op1, op2, op3;
> > >    tree org_x = x, r = NULL_TREE;
> > > @@ -2490,8 +2497,11 @@ cp_fold (tree x)
> > >    if (fold_cache == NULL)
> > >      fold_cache = hash_map<tree, tree>::create_ggc (101);
> > >
> > > -  if (tree *cached = fold_cache->get (x))
> > > -    return *cached;
> > > +  bool cache_p = (flags == ff_none);
> > > +
> > > +  if (cache_p)
> > > +    if (tree *cached = fold_cache->get (x))
> > > +      return *cached;
> > >
> > >    uid_sensitive_constexpr_evaluation_checker c;
> > >
> > > @@ -2526,7 +2536,7 @@ cp_fold (tree x)
> > >  	 Don't create a new tree if op0 != TREE_OPERAND (x, 0), the
> > >  	 folding of the operand should be in the caches and if in cp_fold_r
> > >  	 it will modify it in place.  */
> > > -      op0 = cp_fold (TREE_OPERAND (x, 0));
> > > +      op0 = cp_fold (TREE_OPERAND (x, 0), flags);
> > >        if (op0 == error_mark_node)
> > >  	x = error_mark_node;
> > >        break;
> > > @@ -2571,7 +2581,7 @@ cp_fold (tree x)
> > >  	{
> > >  	  tree p = maybe_undo_parenthesized_ref (x);
> > >  	  if (p != x)
> > > -	    return cp_fold (p);
> > > +	    return cp_fold (p, flags);
> > >  	}
> > >        goto unary;
> > >
> > > @@ -2763,8 +2773,8 @@ cp_fold (tree x)
> > >      case COND_EXPR:
> > >        loc = EXPR_LOCATION (x);
> > >        op0 = cp_fold_rvalue (TREE_OPERAND (x, 0));
> > > -      op1 = cp_fold (TREE_OPERAND (x, 1));
> > > -      op2 = cp_fold (TREE_OPERAND (x, 2));
> > > +      op1 = cp_fold (TREE_OPERAND (x, 1), flags);
> > > +      op2 = cp_fold (TREE_OPERAND (x, 2), flags);
> > >
> > >        if (TREE_CODE (TREE_TYPE (x)) == BOOLEAN_TYPE)
> > >  	{
> > > @@ -2854,7 +2864,7 @@ cp_fold (tree x)
> > >  	    {
> > >  	      if (!same_type_p (TREE_TYPE (x), TREE_TYPE (r)))
> > >  		r = build_nop (TREE_TYPE (x), r);
> > > -	      x = cp_fold (r);
> > > +	      x = cp_fold (r, flags);
> > >  	      break;
> > >  	    }
> > >  	}
> > > @@ -2908,7 +2918,7 @@ cp_fold (tree x)
> > >  	int m = call_expr_nargs (x);
> > >  	for (int i = 0; i < m; i++)
> > >  	  {
> > > -	    r = cp_fold (CALL_EXPR_ARG (x, i));
> > > +	    r = cp_fold (CALL_EXPR_ARG (x, i), flags);
> > >  	    if (r != CALL_EXPR_ARG (x, i))
> > >  	      {
> > >  		if (r == error_mark_node)
> > > @@ -2931,7 +2941,7 @@ cp_fold (tree x)
> > >
> > >  	if (TREE_CODE (r) != CALL_EXPR)
> > >  	  {
> > > -	    x = cp_fold (r);
> > > +	    x = cp_fold (r, flags);
> > >  	    break;
> > >  	  }
> > >
> > > @@ -2944,7 +2954,15 @@ cp_fold (tree x)
> > >  	   constant, but the call followed by an INDIRECT_REF is.  */
> > >  	if (callee && DECL_DECLARED_CONSTEXPR_P (callee)
> > >  	    && !flag_no_inline)
> > > -	  r = maybe_constant_value (x);
> > > +	  {
> > > +	    mce_value manifestly_const_eval = mce_unknown;
> > > +	    if (flags & ff_genericize)
> > > +	      /* At genericization time it's safe to fold
> > > +		 __builtin_is_constant_evaluated to false.  */
> > > +	      manifestly_const_eval = mce_false;
> > > +	    r = maybe_constant_value (x, /*decl=*/NULL_TREE,
> > > +				      manifestly_const_eval);
> > > +	  }
> > >  	optimize = sv;
> > >
> > >  	if (TREE_CODE (r) != CALL_EXPR)
> > > @@ -2971,7 +2989,7 @@ cp_fold (tree x)
> > >  	vec<constructor_elt, va_gc> *nelts = NULL;
> > >  	FOR_EACH_VEC_SAFE_ELT (elts, i, p)
> > >  	  {
> > > -	    tree op = cp_fold (p->value);
> > > +	    tree op = cp_fold (p->value, flags);
> > >  	    if (op != p->value)
> > >  	      {
> > >  		if (op == error_mark_node)
> > > @@ -3002,7 +3020,7 @@ cp_fold (tree x)
> > >
> > >        for (int i = 0; i < n; i++)
> > >  	{
> > > -	  tree op = cp_fold (TREE_VEC_ELT (x, i));
> > > +	  tree op = cp_fold (TREE_VEC_ELT (x, i), flags);
> > >  	  if (op != TREE_VEC_ELT (x, i))
> > >  	    {
> > >  	      if (!changed)
> > > @@ -3019,10 +3037,10 @@ cp_fold (tree x)
> > >      case ARRAY_RANGE_REF:
> > >
> > >        loc = EXPR_LOCATION (x);
> > > -      op0 = cp_fold (TREE_OPERAND (x, 0));
> > > -      op1 = cp_fold (TREE_OPERAND (x, 1));
> > > -      op2 = cp_fold (TREE_OPERAND (x, 2));
> > > -      op3 = cp_fold (TREE_OPERAND (x, 3));
> > > +      op0 = cp_fold (TREE_OPERAND (x, 0), flags);
> > > +      op1 = cp_fold (TREE_OPERAND (x, 1), flags);
> > > +      op2 = cp_fold (TREE_OPERAND (x, 2), flags);
> > > +      op3 = cp_fold (TREE_OPERAND (x, 3), flags);
> > >
> > >        if (op0 != TREE_OPERAND (x, 0)
> > >  	  || op1 != TREE_OPERAND (x, 1)
> > > @@ -3050,7 +3068,7 @@ cp_fold (tree x)
> > >        /* A SAVE_EXPR might contain e.g. (0 * i) + (0 * j), which, after
> > >  	 folding, evaluates to an invariant.  In that case no need to wrap
> > >  	 this folded tree with a SAVE_EXPR.  */
> > > -      r = cp_fold (TREE_OPERAND (x, 0));
> > > +      r = cp_fold (TREE_OPERAND (x, 0), flags);
> > >        if (tree_invariant_p (r))
> > >  	x = r;
> > >        break;
> > > @@ -3069,7 +3087,7 @@ cp_fold (tree x)
> > >        copy_warning (x, org_x);
> > >      }
> > >
> > > -  if (!c.evaluation_restricted_p ())
> > > +  if (cache_p && !c.evaluation_restricted_p ())
> > >      {
> > >        fold_cache->put (org_x, x);
> > >        /* Prevent that we try to fold an already folded result again.  */
> > > diff --git a/gcc/testsuite/g++.dg/opt/pr108243.C b/gcc/testsuite/g++.dg/opt/pr108243.C
> > > new file mode 100644
> > > index 00000000000..4c45dbba13c
> > > --- /dev/null
> > > +++ b/gcc/testsuite/g++.dg/opt/pr108243.C
> > > @@ -0,0 +1,29 @@
> > > +// PR c++/108243
> > > +// { dg-do compile { target c++11 } }
> > > +// { dg-additional-options "-O -fdump-tree-original" }
> > > +
> > > +constexpr int foo() {
> > > +  return __builtin_is_constant_evaluated() + 1;
> > > +}
> > > +
> > > +#if __cpp_if_consteval
> > > +constexpr int bar() {
> > > +  if consteval {
> > > +    return 5;
> > > +  } else {
> > > +    return 4;
> > > +  }
> > > +}
> > > +#endif
> > > +
> > > +int p, q;
> > > +
> > > +int main() {
> > > +  p = foo();
> > > +#if __cpp_if_consteval
> > > +  q = bar();
> > > +#endif
> > > +}
> > > +
> > > +// { dg-final { scan-tree-dump-not "= foo" "original" } }
> > > +// { dg-final { scan-tree-dump-not "= bar" "original" } }
> >
> > Let's also test a static initializer that can't be fully constant-evaluated.
>
> D'oh, doing so revealed that cp_fold_function doesn't reach static
> initializers; that's taken care of by cp_fully_fold_init.  So it seems
> we need to make cp_fold, when called from the latter entry point, also
> assume m_c_e is false.  We can't re-use ff_genericize here because that
> flag has additional effects in cp_fold_r, so it seems we need another
> flag that only affects the manifestly constant-eval stuff; I called
> it ff_mce_false.  How does the following look?

N.B. cp_fully_fold_init is called from only three places:

  * from store_init_value, shortly after manifestly constant
    evaluation of the initializer
  * from split_nonconstant_init
  * and from check_for_mismatched_contracts

So it seems to always be called late enough that we can safely assume
m_c_e is false, as in cp_fold_function.
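
[Concretely, the kind of static initializer at issue — a sketch along
the lines of the first new test below:

  struct A {
    constexpr A (int n, int m) : n(n), m(m) { }
    int n, m;
  };

  void f (int n) {
    // Not fully constant-evaluable because of 'n', so the initializer
    // survives to cp_fully_fold_init, where the builtin can now be
    // folded to false.
    static A a = {n, __builtin_is_constant_evaluated ()};
  }
]
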
>
> -- >8 --
>
> Subject: [PATCH 2/2] c++: speculative constexpr and is_constant_evaluated
>  [PR108243]
>
> This PR illustrates that __builtin_is_constant_evaluated currently acts
> as an optimization barrier for our speculative constexpr evaluation,
> since we don't want to prematurely fold the builtin to false if the
> expression in question would later be manifestly constant evaluated (in
> which case it must be folded to true).
>
> This patch fixes this by permitting __builtin_is_constant_evaluated
> to get folded as false during cp_fold_function and cp_fully_fold_init,
> since at these points we're sure we're done with manifestly constant
> evaluation.  To that end we add a flags parameter to cp_fold that
> controls whether we pass mce_false or mce_unknown to maybe_constant_value
> when folding a CALL_EXPR.
>
> 	PR c++/108243
> 	PR c++/97553
>
> gcc/cp/ChangeLog:
>
> 	* cp-gimplify.cc (enum fold_flags): Define.
> 	(cp_fold_data::genericize): Replace this data member with ...
> 	(cp_fold_data::fold_flags): ... this.
> 	(cp_fold_r): Adjust use of cp_fold_data and calls to cp_fold.
> 	(cp_fold_function): Likewise.
> 	(cp_fold_maybe_rvalue): Likewise.
> 	(cp_fully_fold_init): Likewise.
> 	(cp_fold): Add fold_flags parameter.  Don't cache if flags
> 	isn't empty.
> 	<case CALL_EXPR>: If ff_mce_false is set, fold
> 	__builtin_is_constant_evaluated to false and pass mce_false to
> 	maybe_constant_value.
>
> gcc/testsuite/ChangeLog:
>
> 	* g++.dg/opt/is_constant_evaluated1.C: New test.
> 	* g++.dg/opt/is_constant_evaluated2.C: New test.
> ---
>  gcc/cp/cp-gimplify.cc                         | 88 ++++++++++++-------
>  .../g++.dg/opt/is_constant_evaluated1.C       | 14 +++
>  .../g++.dg/opt/is_constant_evaluated2.C       | 32 +++++++
>  3 files changed, 104 insertions(+), 30 deletions(-)
>  create mode 100644 gcc/testsuite/g++.dg/opt/is_constant_evaluated1.C
>  create mode 100644 gcc/testsuite/g++.dg/opt/is_constant_evaluated2.C
>
> diff --git a/gcc/cp/cp-gimplify.cc b/gcc/cp/cp-gimplify.cc
> index 9929d29981a..590ed787997 100644
> --- a/gcc/cp/cp-gimplify.cc
> +++ b/gcc/cp/cp-gimplify.cc
> @@ -43,12 +43,26 @@ along with GCC; see the file COPYING3.  If not see
>  #include "omp-general.h"
>  #include "opts.h"
>
> +/* Flags for cp_fold and cp_fold_r.  */
> +
> +enum fold_flags {
> +  ff_none = 0,
> +  /* Whether we're being called from cp_fold_function.  */
> +  ff_genericize = 1 << 0,
> +  /* Whether we're folding late enough that we could assume
> +     we're definitely not in a manifestly constant-evaluated
> +     context.  */
> +  ff_mce_false = 1 << 1,
> +};
> +
> +using fold_flags_t = int;
> +
>  /* Forward declarations.  */
>
>  static tree cp_genericize_r (tree *, int *, void *);
>  static tree cp_fold_r (tree *, int *, void *);
>  static void cp_genericize_tree (tree*, bool);
> -static tree cp_fold (tree);
> +static tree cp_fold (tree, fold_flags_t);
>
>  /* Genericize a TRY_BLOCK.  */
>
> @@ -1012,9 +1026,8 @@ struct cp_genericize_data
>  struct cp_fold_data
>  {
>    hash_set<tree> pset;
> -  bool genericize; // called from cp_fold_function?
> -
> -  cp_fold_data (bool g): genericize (g) {}
> +  fold_flags_t flags;
> +  cp_fold_data (fold_flags_t flags): flags (flags) {}
>  };
>
>  static tree
> @@ -1055,7 +1068,7 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
>        break;
>      }
>
> -  *stmt_p = stmt = cp_fold (*stmt_p);
> +  *stmt_p = stmt = cp_fold (*stmt_p, data->flags);
>
>    if (data->pset.add (stmt))
>      {
> @@ -1135,12 +1148,12 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
>  	 here rather than in cp_genericize to avoid problems with the invisible
>  	 reference transition.  */
>      case INIT_EXPR:
> -      if (data->genericize)
> +      if (data->flags & ff_genericize)
>  	cp_genericize_init_expr (stmt_p);
>        break;
>
>      case TARGET_EXPR:
> -      if (data->genericize)
> +      if (data->flags & ff_genericize)
>  	cp_genericize_target_expr (stmt_p);
>
>        /* Folding might replace e.g. a COND_EXPR with a TARGET_EXPR; in
> @@ -1173,7 +1186,7 @@ cp_fold_r (tree *stmt_p, int *walk_subtrees, void *data_)
>  void
>  cp_fold_function (tree fndecl)
>  {
> -  cp_fold_data data (/*genericize*/true);
> +  cp_fold_data data (ff_genericize | ff_mce_false);
>    cp_walk_tree (&DECL_SAVED_TREE (fndecl), cp_fold_r, &data, NULL);
>  }
>
> @@ -2391,7 +2404,7 @@ cp_fold_maybe_rvalue (tree x, bool rval)
>  {
>    while (true)
>      {
> -      x = cp_fold (x);
> +      x = cp_fold (x, ff_none);
>        if (rval)
>  	x = mark_rvalue_use (x);
>        if (rval && DECL_P (x)
> @@ -2450,7 +2463,7 @@ cp_fully_fold_init (tree x)
>    if (processing_template_decl)
>      return x;
>    x = cp_fully_fold (x);
> -  cp_fold_data data (/*genericize*/false);
> +  cp_fold_data data (ff_mce_false);
>    cp_walk_tree (&x, cp_fold_r, &data, NULL);
>    return x;
>  }
> @@ -2485,7 +2498,7 @@ clear_fold_cache (void)
>     Function returns X or its folded variant.  */
>
>  static tree
> -cp_fold (tree x)
> +cp_fold (tree x, fold_flags_t flags)
>  {
>    tree op0, op1, op2, op3;
>    tree org_x = x, r = NULL_TREE;
> @@ -2506,8 +2519,11 @@ cp_fold (tree x)
>    if (fold_cache == NULL)
>      fold_cache = hash_map<tree, tree>::create_ggc (101);
>
> -  if (tree *cached = fold_cache->get (x))
> -    return *cached;
> +  bool cache_p = (flags == ff_none);
> +
> +  if (cache_p)
> +    if (tree *cached = fold_cache->get (x))
> +      return *cached;
>
>    uid_sensitive_constexpr_evaluation_checker c;
>
> @@ -2542,7 +2558,7 @@ cp_fold (tree x)
>  	 Don't create a new tree if op0 != TREE_OPERAND (x, 0), the
>  	 folding of the operand should be in the caches and if in cp_fold_r
>  	 it will modify it in place.  */
> -      op0 = cp_fold (TREE_OPERAND (x, 0));
> +      op0 = cp_fold (TREE_OPERAND (x, 0), flags);
>        if (op0 == error_mark_node)
>  	x = error_mark_node;
>        break;
> @@ -2587,7 +2603,7 @@ cp_fold (tree x)
>  	{
>  	  tree p = maybe_undo_parenthesized_ref (x);
>  	  if (p != x)
> -	    return cp_fold (p);
> +	    return cp_fold (p, flags);
>  	}
>        goto unary;
>
> @@ -2779,8 +2795,8 @@ cp_fold (tree x)
>      case COND_EXPR:
>        loc = EXPR_LOCATION (x);
>        op0 = cp_fold_rvalue (TREE_OPERAND (x, 0));
> -      op1 = cp_fold (TREE_OPERAND (x, 1));
> -      op2 = cp_fold (TREE_OPERAND (x, 2));
> +      op1 = cp_fold (TREE_OPERAND (x, 1), flags);
> +      op2 = cp_fold (TREE_OPERAND (x, 2), flags);
>
>        if (TREE_CODE (TREE_TYPE (x)) == BOOLEAN_TYPE)
>  	{
> @@ -2870,7 +2886,7 @@ cp_fold (tree x)
>  	    {
>  	      if (!same_type_p (TREE_TYPE (x), TREE_TYPE (r)))
>  		r = build_nop (TREE_TYPE (x), r);
> -	      x = cp_fold (r);
> +	      x = cp_fold (r, flags);
>  	      break;
>  	    }
>  	}
> @@ -2890,8 +2906,12 @@ cp_fold (tree x)
>  	{
>  	  switch (DECL_FE_FUNCTION_CODE (callee))
>  	    {
> -	      /* Defer folding __builtin_is_constant_evaluated.  */
>  	    case CP_BUILT_IN_IS_CONSTANT_EVALUATED:
> +	      /* Defer folding __builtin_is_constant_evaluated unless
> +		 we can assume this isn't a manifestly constant-evaluated
> +		 context.  */
> +	      if (flags & ff_mce_false)
> +		x = boolean_false_node;
>  	      break;
>  	    case CP_BUILT_IN_SOURCE_LOCATION:
>  	      x = fold_builtin_source_location (x);
> @@ -2924,7 +2944,7 @@ cp_fold (tree x)
>  	int m = call_expr_nargs (x);
>  	for (int i = 0; i < m; i++)
>  	  {
> -	    r = cp_fold (CALL_EXPR_ARG (x, i));
> +	    r = cp_fold (CALL_EXPR_ARG (x, i), flags);
>  	    if (r != CALL_EXPR_ARG (x, i))
>  	      {
>  		if (r == error_mark_node)
> @@ -2947,7 +2967,7 @@ cp_fold (tree x)
>
>  	if (TREE_CODE (r) != CALL_EXPR)
>  	  {
> -	    x = cp_fold (r);
> +	    x = cp_fold (r, flags);
>  	    break;
>  	  }
>
> @@ -2960,7 +2980,15 @@ cp_fold (tree x)
>  	   constant, but the call followed by an INDIRECT_REF is.  */
>  	if (callee && DECL_DECLARED_CONSTEXPR_P (callee)
>  	    && !flag_no_inline)
> -	  r = maybe_constant_value (x);
> +	  {
> +	    mce_value manifestly_const_eval = mce_unknown;
> +	    if (flags & ff_mce_false)
> +	      /* Allow folding __builtin_is_constant_evaluated to false during
> +		 constexpr evaluation of this call.  */
> +	      manifestly_const_eval = mce_false;
> +	    r = maybe_constant_value (x, /*decl=*/NULL_TREE,
> +				      manifestly_const_eval);
> +	  }
>  	optimize = sv;
>
>  	if (TREE_CODE (r) != CALL_EXPR)
> @@ -2987,7 +3015,7 @@ cp_fold (tree x)
>  	vec<constructor_elt, va_gc> *nelts = NULL;
>  	FOR_EACH_VEC_SAFE_ELT (elts, i, p)
>  	  {
> -	    tree op = cp_fold (p->value);
> +	    tree op = cp_fold (p->value, flags);
>  	    if (op != p->value)
>  	      {
>  		if (op == error_mark_node)
> @@ -3018,7 +3046,7 @@ cp_fold (tree x)
>
>        for (int i = 0; i < n; i++)
>  	{
> -	  tree op = cp_fold (TREE_VEC_ELT (x, i));
> +	  tree op = cp_fold (TREE_VEC_ELT (x, i), flags);
>  	  if (op != TREE_VEC_ELT (x, i))
>  	    {
>  	      if (!changed)
> @@ -3035,10 +3063,10 @@ cp_fold (tree x)
>      case ARRAY_RANGE_REF:
>
>        loc = EXPR_LOCATION (x);
> -      op0 = cp_fold (TREE_OPERAND (x, 0));
> -      op1 = cp_fold (TREE_OPERAND (x, 1));
> -      op2 = cp_fold (TREE_OPERAND (x, 2));
> -      op3 = cp_fold (TREE_OPERAND (x, 3));
> +      op0 = cp_fold (TREE_OPERAND (x, 0), flags);
> +      op1 = cp_fold (TREE_OPERAND (x, 1), flags);
> +      op2 = cp_fold (TREE_OPERAND (x, 2), flags);
> +      op3 = cp_fold (TREE_OPERAND (x, 3), flags);
>
>        if (op0 != TREE_OPERAND (x, 0)
>  	  || op1 != TREE_OPERAND (x, 1)
> @@ -3066,7 +3094,7 @@ cp_fold (tree x)
>        /* A SAVE_EXPR might contain e.g. (0 * i) + (0 * j), which, after
>  	 folding, evaluates to an invariant.  In that case no need to wrap
>  	 this folded tree with a SAVE_EXPR.  */
> -      r = cp_fold (TREE_OPERAND (x, 0));
> +      r = cp_fold (TREE_OPERAND (x, 0), flags);
>        if (tree_invariant_p (r))
>  	x = r;
>        break;
> @@ -3085,7 +3113,7 @@ cp_fold (tree x)
>        copy_warning (x, org_x);
>      }
>
> -  if (!c.evaluation_restricted_p ())
> +  if (cache_p && !c.evaluation_restricted_p ())
>      {
>        fold_cache->put (org_x, x);
>        /* Prevent that we try to fold an already folded result again.  */
> diff --git a/gcc/testsuite/g++.dg/opt/is_constant_evaluated1.C b/gcc/testsuite/g++.dg/opt/is_constant_evaluated1.C
> new file mode 100644
> index 00000000000..ee05cbab785
> --- /dev/null
> +++ b/gcc/testsuite/g++.dg/opt/is_constant_evaluated1.C
> @@ -0,0 +1,14 @@
> +// PR c++/108243
> +// { dg-do compile { target c++11 } }
> +// { dg-additional-options "-O -fdump-tree-original" }
> +
> +struct A {
> +  constexpr A(int n, int m) : n(n), m(m) { }
> +  int n, m;
> +};
> +
> +void f(int n) {
> +  static A a = {n, __builtin_is_constant_evaluated()};
> +}
> +
> +// { dg-final { scan-tree-dump-not "is_constant_evaluated" "original" } }
> diff --git a/gcc/testsuite/g++.dg/opt/is_constant_evaluated2.C b/gcc/testsuite/g++.dg/opt/is_constant_evaluated2.C
> new file mode 100644
> index 00000000000..ed964e20a7a
> --- /dev/null
> +++ b/gcc/testsuite/g++.dg/opt/is_constant_evaluated2.C
> @@ -0,0 +1,32 @@
> +// PR c++/97553
> +// { dg-do compile { target c++11 } }
> +// { dg-additional-options "-O -fdump-tree-original" }
> +
> +constexpr int foo() {
> +  return __builtin_is_constant_evaluated() + 1;
> +}
> +
> +#if __cpp_if_consteval
> +constexpr int bar() {
> +  if consteval {
> +    return 5;
> +  } else {
> +    return 4;
> +  }
> +}
> +#endif
> +
> +int p, q;
> +
> +int main() {
> +  p = foo();
> +#if __cpp_if_consteval
> +  q = bar();
> +#endif
> +}
> +
> +// { dg-final { scan-tree-dump "p = 1" "original" } }
> +// { dg-final { scan-tree-dump-not "= foo" "original" } }
> +
> +// { dg-final { scan-tree-dump "q = 4" "original" { target c++23 } } }
> +// { dg-final { scan-tree-dump-not "= bar" "original" { target c++23 } } }
> --
> 2.39.1.388.g2fc9e9ca3c
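
[For reference, with the patch applied the "original" dump for the
second test is expected to contain the folded assignments, roughly:

  p = 1;  // __builtin_is_constant_evaluated () folded to false, so 0 + 1
  q = 4;  // under C++23, bar () takes the non-consteval branch

which is what the scan-tree-dump directives above check.]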