From: Jason Merrill
To: Kai Tietz
CC: gcc-patches List
Subject: Re: C++ delayed folding branch review
Date: Fri, 24 Apr 2015 18:25:00 -0000
Message-ID: <553A8A8E.8070403@redhat.com>
References: <5539C519.8070004@redhat.com>

On 04/24/2015 09:46 AM, Kai Tietz wrote:
> Sure, we can use here instead *_fully_fold, but for what costs? In
> general we need to deal here a simple one-level fold for simplifying
> constant-values, and/or removing useless type-conversions.

Well, here you're doing a two-level fold.  And in general fold relies on
subexpressions being appropriately folded.  So for warning folding, I
think the caching approach we were talking about in the call today is
the way to go.
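
To be concrete, here is roughly the sort of caching I mean (untested
sketch; whether the cache is a file-static hash_map as here or keeps
being passed around like the patch's fold_hash parameter is a detail,
and the call to fold is just a placeholder for the real per-TREE_CODE
folding):

  static hash_map<tree, tree> *fold_cache;

  static tree
  cp_fold (tree x)
  {
    if (!x || CONSTANT_CLASS_P (x) || TREE_CODE (x) == ERROR_MARK)
      return x;

    if (fold_cache == NULL)
      fold_cache = new hash_map<tree, tree>;

    if (tree *cached = fold_cache->get (x))
      return *cached;

    tree ret = fold (x);  /* placeholder for the real folding */

    fold_cache->put (x, ret);
    /* Folding is idempotent, so remember the result as well; a second
       pass for warnings then only does hash lookups.  */
    if (ret != x)
      fold_cache->put (ret, ret);

    return ret;
  }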
>>> @@ -597,9 +597,9 @@ null_member_pointer_value_p (tree t)
>>>       return false;
>>>     else if (TYPE_PTRMEMFUNC_P (type))
>>>       return (TREE_CODE (t) == CONSTRUCTOR
>>> -             && integer_zerop (CONSTRUCTOR_ELT (t, 0)->value));
>>> +             && integer_zerop (fold (CONSTRUCTOR_ELT (t, 0)->value)));
>>>     else if (TYPE_PTRDATAMEM_P (type))
>>> -     return integer_all_onesp (t);
>>> +     return integer_all_onesp (fold (t));
>>
>> Calling fold here is wrong; it doesn't handle constexpr, and we should
>> have folded before we got here.
>
> s.o. we need to make sure constant-values get rid of useless
> types-conversions/negates/etc ...

Certainly a constant value from maybe_constant_value should not have a
useless type conversion wrapped around it, as that makes it appear
non-constant.

> Well, fold_convert isn't necessarily the same as fold (convert ())
> within C++, due convert handles special cases fold_convert doesn't.

Ah, true.

>>> @@ -576,7 +576,6 @@ build_simple_base_path (tree expr, tree binfo)
>>>       expr = build3 (COMPONENT_REF,
>>>                      cp_build_qualified_type (type, type_quals),
>>>                      expr, field, NULL_TREE);
>>> -     expr = fold_if_not_in_template (expr);
>>
>> I don't think we need to remove this fold, since it is part of compiler
>> internals rather than something the user wrote.  Really, we should
>> represent the base conversion with something like a CONVERT_EXPR and
>> only call this function when we want to fold it.  But that can wait for
>> a later patch.
>
> Ok.  I remove this fold-case due simply removing
> fold_if_not_in_template function. So well, we could re-add a call for
> fold, if not in template.

Let's try not checking for being in a template, see if it breaks.

>> That said, we should probably just remove this case and the next, as
>> they are obsolete.  I'll remove them on the trunk.

Done.

>>> +static tree
>>> +cp_fold (tree x, hash_map *fold_hash)
>>> +{
>>
>> ....
>>
>> I still think we need a hybrid of this and the constexpr code: it isn't
>> full folding if we aren't doing constexpr evaluation.  But we can't just
>> use maybe_constant_value because that only folds C++
>> constant-expressions, and we want to fold more things than that.  I
>> suppose one simple approach for now would be to call
>> maybe_constant_value from cp_fold.
>
> Well, the functionality of cp_fold and maybe_constant_value (well,
> actually how constexpr.c works) are different in cases of
> non-constant results.

I think that's exactly what I was saying above: "we can't just use
maybe_constant_value because that only folds C++ constant-expressions,
and we want to fold more things than that."

My point is that cp_fold should be a superset of maybe_constant_value,
to fix bugs like 53792.  And the easiest way to get that would seem to be
by calling maybe_constant_value from cp_fold.

>>> @@ -614,9 +614,13 @@ cp_fold_convert (tree type, tree expr)
>>>      }
>>>    else
>>>      {
>>> -      conv = fold_convert (type, expr);
>>> +      if (TREE_CODE (expr) == INTEGER_CST)
>>> +        conv = fold_convert (type, expr);
>>> +      else
>>> +        conv = convert (type, expr);
>>
>> Why?  If we're calling cp_fold_convert in a place where we don't want to
>> fold, we should stop calling it rather than change it.
>
> See, that we want to take care that constant-value is found here.
> Otherwise we don't want anything folded.  Well, we could introduce
> for this a special routine to abstract intention here.

OK, that makes sense.  Say, a function called like "fold" that only folds
conversions (and NEGATE_EXPR) of constants.  It might make sense to do
that and otherwise continue to delay folding of conversions.  In that
context I guess this change makes sense.
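
To spell out what I mean by that constants-only helper, something as
small as this untested sketch would do (the name fold_simple and the
exact set of tree codes handled are placeholders):

  /* Fold only conversions and negations of constant operands; leave
     everything else alone so folding stays delayed in general.  */

  static tree
  fold_simple (tree t)
  {
    if (t == NULL_TREE)
      return NULL_TREE;

    switch (TREE_CODE (t))
      {
      CASE_CONVERT:
      case NON_LVALUE_EXPR:
      case NEGATE_EXPR:
        if (CONSTANT_CLASS_P (TREE_OPERAND (t, 0)))
          return fold (t);
        break;

      default:
        break;
      }

    return t;
  }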
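
And to make the earlier "superset of maybe_constant_value" point
concrete, the wiring I have in mind is roughly this (sketch only; the
wrapper name is made up, and I'm ignoring the fold_hash argument for
brevity):

  /* Give the constexpr machinery the first shot; anything it can reduce
     to a constant is returned directly, everything else falls back to
     the generic folding.  */

  static tree
  cp_fold_with_constexpr (tree x)
  {
    tree r = maybe_constant_value (x);
    if (r != x && TREE_CONSTANT (r))
      return r;
    return cp_fold (x);
  }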
>>> @@ -8502,16 +8502,18 @@ compute_array_index_type (tree name, tree size,
>>> tsubst_flags_t complain)
>>> +  /* We need to do fully folding to determine if we have VLA, or not.  */
>>> +  tree size_constant = maybe_constant_value (size);
>>
>> Why is this needed?  We already call maybe_constant_value earlier in
>> compute_array_index_type.
>
> Well, see above.  We might have constant-value not simplified.  So we
> need a way to make sure we simplify in such case, but if it is
> none-constant, we don't want an modified expression.  So
> maybe_constant_value does this ...

Yes, but we already called maybe_constant_value.  Calling it again
shouldn't make any difference.

>>> -      itype = fold (itype);
>>> +      itype = maybe_constant_value (itype);
>>> -      itype = variable_size (fold (newitype));
>>> +      itype = variable_size (maybe_constant_value (newitype));
>>
>> Maybe these should use cp_fully_fold?
>
> We could use fully_fold, but we would also modify none-constant
> expressions by this.  Do we actually want that here?

For the first one, I actually think a plain fold is enough, or perhaps
change the cp_build_binary_op to size_binop.

For the second, can we drop that whole block (the one starting with
"if (TREE_CODE (itype) != SAVE_EXPR)") and rely on late folding to handle
SIZEOF_EXPR?

>>> @@ -13090,6 +13092,8 @@ build_enumerator (tree name, tree value, tree
>>> enumtype, location_t loc)
>>> +  if (value)
>>> +    value = maybe_constant_value (value);
>>
>> This seems unnecessary, since we call cxx_constant_value below.
>
> See the other places

As above, calling the constexpr code twice shouldn't make a difference.

>>> -  gcc_assert (val1->v.val_unsigned == DWARF2_ADDR_SIZE);
>>> +  gcc_assert (val1->v.val_unsigned
>>> +              == (unsigned HOST_WIDE_INT) DWARF2_ADDR_SIZE);
>>
>> We need to fix this warning so this change is unnecessary.
>
> Well, those following changes got already acked by Jeff for 4.9.

Because we want the compiler to build with a wide range of other
compilers.  It's still a warning regression that needs to be fixed.

>>> -      gcc_assert (TREE_OPERAND (t, 0) == decl);
>>> +      gcc_assert (TREE_OPERAND (t, 0) == decl || TREE_OPERAND (t, 1) ==
>>> decl);
>>
>> This change doesn't seem to have anything to do with delayed folding.
>
> It gets raised by delayed folding, so it is required to run.

We're in gimplification, so we should have been through cp_fold, so this
should have been folded appropriately by now.

>>>               || (gimple_omp_for_kind (for_stmt)
>>> -                 == GF_OMP_FOR_KIND_CILKFOR));
>>> +                 == GF_OMP_FOR_KIND_CILKFOR)
>>> +              || (gimple_omp_for_kind (for_stmt)
>>> +                  == GF_OMP_FOR_KIND_FOR));
>>
>> Nor this one.
>
> Same thing ...

How does delayed folding result in a different OMP_FOR_KIND?

Jason