From: Jason Merrill
To: Patrick Palka, Jakub Jelinek
Cc: gcc-patches List
Subject: Re: [PATCH] c++: quadratic constexpr behavior for left-assoc logical exprs [PR102780]
Date: Wed, 27 Oct 2021 16:37:04 -0400
Message-ID: <8f7e4b0e-0de6-b80c-387b-81ebfd340bab@redhat.com>
In-Reply-To: <40dcf6d9-a4a-227a-9553-43c13a42b28e@idea>
References: <20211026174418.3142626-1-ppalka@redhat.com> <20211026184544.GA304296@tucnak> <99c81449-d68a-31c8-2ce6-fff154508c9@idea> <20211026211108.GE304296@tucnak> <40dcf6d9-a4a-227a-9553-43c13a42b28e@idea>

On 10/27/21 14:54, Patrick Palka wrote:
> On Tue, 26 Oct 2021, Jakub Jelinek wrote:
>
>> On Tue, Oct 26, 2021 at 05:07:43PM -0400, Patrick Palka wrote:
>>> The performance impact of the other calls to cxx_eval_outermost_constant_expr
>>> from p_c_e_1 is probably already mostly mitigated by the constexpr call
>>> cache and the fact that we try to evaluate all calls to constexpr
>>> functions during cp_fold_function anyway (at least with -O).  So trial
>>
>> constexpr function bodies don't go through cp_fold_function (intentionally,
>> so that we don't optimize away UB); the bodies are copied before the trees
>> of the normal copy are folded.
>
> Ah right, I had forgotten about that.
>
> Here's another approach that doesn't need to remove trial evaluation for
> &&/||.  The idea is to first quietly check whether the second operand is
> potentially constant _before_ performing trial evaluation of the first
> operand.  This speeds up the case we care about (both operands are
> potentially constant) without regressing any diagnostics.  We have to be
> careful not to emit bogus diagnostics when tf_error is set, hence the
> first hunk below, which makes p_c_e_1 always proceed quietly first and
> replay noisily in case of error (similar to how satisfaction works).
>
> Would something like this be preferable?

Seems plausible, though doubling the number of stack frames is a downside.
What did you think of Jakub's suggestion of linearizing the terms?

> -- >8 --
>
> gcc/cp/ChangeLog:
>
> 	* constexpr.c (potential_constant_expression_1): When tf_error is
> 	set, proceed quietly first and return true if successful.
> 	: When tf_error is not set, check potentiality
> 	of the second operand before performing trial evaluation of the
> 	first operand rather than after.
>
>
> diff --git a/gcc/cp/constexpr.c b/gcc/cp/constexpr.c
> index 6f83d303cdd..821bd41d994 100644
> --- a/gcc/cp/constexpr.c
> +++ b/gcc/cp/constexpr.c
> @@ -8056,6 +8056,14 @@ potential_constant_expression_1 (tree t, bool want_rval, bool strict, bool now,
>  #define RECUR(T,RV) \
>    potential_constant_expression_1 ((T), (RV), strict, now, flags, jump_target)
>  
> +  if (flags & tf_error)
> +    {
> +      flags &= ~tf_error;
> +      if (RECUR (t, want_rval))
> +        return true;
> +      flags |= tf_error;
> +    }
> +
>    enum { any = false, rval = true };
>    int i;
>    tree tmp;
> @@ -8892,13 +8900,16 @@ potential_constant_expression_1 (tree t, bool want_rval, bool strict, bool now,
>        tmp = boolean_false_node;
>      truth:
>        {
> -        tree op = TREE_OPERAND (t, 0);
> -        if (!RECUR (op, rval))
> +        tree op0 = TREE_OPERAND (t, 0);
> +        tree op1 = TREE_OPERAND (t, 1);
> +        if (!RECUR (op0, rval))
>            return false;
> +        if (!(flags & tf_error) && RECUR (op1, rval))
> +          return true;
>          if (!processing_template_decl)
> -          op = cxx_eval_outermost_constant_expr (op, true);
> -        if (tree_int_cst_equal (op, tmp))
> -          return RECUR (TREE_OPERAND (t, 1), rval);
> +          op0 = cxx_eval_outermost_constant_expr (op0, true);
> +        if (tree_int_cst_equal (op0, tmp))
> +          return (flags & tf_error) ? RECUR (op1, rval) : false;
>          else
>            return true;
>        }
>
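For reference, here is a made-up reduced example of the shape at issue (an
illustrative sketch only, not the testcase from the PR): a long
left-associated chain of && parses as (((a && b) && c) && ...), so trial
evaluation of the first operand of each TRUTH_ANDIF_EXPR re-evaluates the
whole prefix of the chain, which is roughly quadratic in the number of terms.
With the change above, when tf_error is not set, the quiet check of the second
operand succeeds first and the trial evaluation is skipped on that common path.

  // Hypothetical reduced example; any sufficiently long chain of this
  // shape exercises the quadratic behavior described above.
  constexpr bool
  chain ()
  {
    return true && true && true && true   // ...imagine a few thousand terms
           && true && true && true && true;
  }

  static_assert (chain (), "long && chain should still be a constant expression");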