From: Patrick Palka <ppalka@redhat.com>
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] c++: constexpr base-to-derived conversion with offset 0 [PR103879]
Date: Tue, 4 Jan 2022 11:54:08 -0500
Message-Id: <20220104165408.4063383-1-ppalka@redhat.com>

r12-136 made us canonicalize an object/offset pair with negative offset
into one with a nonnegative offset, by iteratively absorbing the
innermost component into the offset and stopping as soon as the offset
becomes nonnegative.

This patch strengthens this transformation to make it keep absorbing
even if the offset is already 0, as long as the innermost component is
at position 0 (and thus absorbing doesn't change the offset).  This
lets us accept the two constexpr testcases below, which we'd previously
reject essentially because cxx_fold_indirect_ref wasn't able to resolve
*(B*)&b.D123 (where D123 is the base subobject A at position 0) to
just b.

Bootstrapped and regtested on x86_64-pc-linux-gnu, does this look OK
for trunk?
	PR c++/103879

gcc/cp/ChangeLog:

	* constexpr.c (cxx_fold_indirect_ref): Split out object/offset
	canonicalization step into a local lambda.  Strengthen it to
	absorb more components at position 0.  Use it before both calls
	to cxx_fold_indirect_ref_1.

gcc/testsuite/ChangeLog:

	* g++.dg/cpp1y/constexpr-base2.C: New test.
	* g++.dg/cpp1y/constexpr-base2a.C: New test.
---
 gcc/cp/constexpr.c                            | 38 +++++++++++++------
 gcc/testsuite/g++.dg/cpp1y/constexpr-base2.C  | 21 ++++++++++
 gcc/testsuite/g++.dg/cpp1y/constexpr-base2a.C | 25 ++++++++++++
 3 files changed, 72 insertions(+), 12 deletions(-)
 create mode 100644 gcc/testsuite/g++.dg/cpp1y/constexpr-base2.C
 create mode 100644 gcc/testsuite/g++.dg/cpp1y/constexpr-base2a.C

diff --git a/gcc/cp/constexpr.c b/gcc/cp/constexpr.c
index 72be45c9e87..1ec33a00ee5 100644
--- a/gcc/cp/constexpr.c
+++ b/gcc/cp/constexpr.c
@@ -5144,6 +5144,25 @@ cxx_fold_indirect_ref (const constexpr_ctx *ctx, location_t loc, tree type,
   if (!INDIRECT_TYPE_P (subtype))
     return NULL_TREE;
 
+  /* Canonicalizes the given OBJ/OFF pair by iteratively absorbing
+     the innermost component into the offset until the offset is
+     nonnegative, so that cxx_fold_indirect_ref_1 can identify
+     more folding opportunities.  */
+  auto canonicalize_obj_off = [] (tree& obj, tree& off) {
+    while (TREE_CODE (obj) == COMPONENT_REF
+	   && (tree_int_cst_sign_bit (off) || integer_zerop (off)))
+      {
+	tree field = TREE_OPERAND (obj, 1);
+	tree pos = byte_position (field);
+	if (integer_zerop (off) && integer_nonzerop (pos))
+	  /* If the offset is already 0, keep going as long as the
+	     component is at position 0.  */
+	  break;
+	off = int_const_binop (PLUS_EXPR, off, pos);
+	obj = TREE_OPERAND (obj, 0);
+      }
+  };
+
   if (TREE_CODE (sub) == ADDR_EXPR)
     {
       tree op = TREE_OPERAND (sub, 0);
@@ -5162,7 +5181,12 @@ cxx_fold_indirect_ref (const constexpr_ctx *ctx, location_t loc, tree type,
	    return op;
	}
       else
-	return cxx_fold_indirect_ref_1 (ctx, loc, type, op, 0, empty_base);
+	{
+	  tree off = integer_zero_node;
+	  canonicalize_obj_off (op, off);
+	  gcc_assert (integer_zerop (off));
+	  return cxx_fold_indirect_ref_1 (ctx, loc, type, op, 0, empty_base);
+	}
     }
   else if (TREE_CODE (sub) == POINTER_PLUS_EXPR
	   && tree_fits_uhwi_p (TREE_OPERAND (sub, 1)))
@@ -5174,17 +5198,7 @@ cxx_fold_indirect_ref (const constexpr_ctx *ctx, location_t loc, tree type,
       if (TREE_CODE (op00) == ADDR_EXPR)
	{
	  tree obj = TREE_OPERAND (op00, 0);
-	  while (TREE_CODE (obj) == COMPONENT_REF
-		 && tree_int_cst_sign_bit (off))
-	    {
-	      /* Canonicalize this object/offset pair by iteratively absorbing
-		 the innermost component into the offset until the offset is
-		 nonnegative, so that cxx_fold_indirect_ref_1 can identify
-		 more folding opportunities.  */
-	      tree field = TREE_OPERAND (obj, 1);
-	      off = int_const_binop (PLUS_EXPR, off, byte_position (field));
-	      obj = TREE_OPERAND (obj, 0);
-	    }
+	  canonicalize_obj_off (obj, off);
	  return cxx_fold_indirect_ref_1 (ctx, loc, type, obj,
					  tree_to_uhwi (off), empty_base);
	}
diff --git a/gcc/testsuite/g++.dg/cpp1y/constexpr-base2.C b/gcc/testsuite/g++.dg/cpp1y/constexpr-base2.C
new file mode 100644
index 00000000000..7cbf5bf32b7
--- /dev/null
+++ b/gcc/testsuite/g++.dg/cpp1y/constexpr-base2.C
@@ -0,0 +1,21 @@
+// PR c++/103879
+// { dg-do compile { target c++14 } }
+
+struct A {
+  int n = 42;
+};
+
+struct B : A { };
+
+struct C {
+  B b;
+};
+
+constexpr int f() {
+  C c;
+  A& a = static_cast<A&>(c.b);
+  B& b = static_cast<B&>(a);
+  return b.n;
+}
+
+static_assert(f() == 42, "");
diff --git a/gcc/testsuite/g++.dg/cpp1y/constexpr-base2a.C b/gcc/testsuite/g++.dg/cpp1y/constexpr-base2a.C
new file mode 100644
index 00000000000..872e9bb6d6a
--- /dev/null
+++ b/gcc/testsuite/g++.dg/cpp1y/constexpr-base2a.C
@@ -0,0 +1,25 @@
+// PR c++/103879
+// { dg-do compile { target c++14 } }
+
+struct A {
+  int n = 42;
+};
+
+struct Y { int m = 0; };
+
+struct X : Y, A { };
+
+struct B : X { };
+
+struct C {
+  B b;
+};
+
+constexpr int f() {
+  C c;
+  A& a = static_cast<A&>(c.b);
+  B& b = static_cast<B&>(a);
+  return b.n;
+}
+
+static_assert(f() == 42, "");
-- 
2.34.1.428.gdcc0cd074f