From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jakub Jelinek
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r13-5652] sched-deps, cselib: Fix up some -fcompare-debug issues and regressions [PR108463]
X-Act-Checkin: gcc
X-Git-Author: Jakub Jelinek
X-Git-Refname: refs/heads/master
X-Git-Oldrev: 605d1297b91c2c7c23ccfe669e66dda5791d1f55
X-Git-Newrev: 465a9c51e7d5bafa7a81195b5af20f2a54f22210
Message-Id: <20230202125623.A581E3858C74@sourceware.org>
Date: Thu, 2 Feb 2023 12:56:23 +0000 (GMT)

https://gcc.gnu.org/g:465a9c51e7d5bafa7a81195b5af20f2a54f22210

commit r13-5652-g465a9c51e7d5bafa7a81195b5af20f2a54f22210
Author: Jakub Jelinek
Date:   Thu Feb 2 13:52:45 2023 +0100

    sched-deps, cselib: Fix up some -fcompare-debug issues and regressions [PR108463]

    On Sat, Jan 14, 2023 at 08:26:00AM -0300, Alexandre Oliva via Gcc-patches wrote:
    > The testcase used to get scheduled differently depending on the
    > presence of debug insns with MEMs.  It's not clear to me why those
    > MEMs affected scheduling, but the cselib pre-canonicalization of the
    > MEM address is not used at all when analyzing debug insns, so the
    > memory allocation and lookup are pure waste.  Somehow, avoiding that
    > waste fixes the problem, or makes it go latent.

    Unfortunately, this patch breaks the following testcase.
    The code in sched_analyze_2 did 2 things:
    1) cselib_lookup_from_insn
    2) shallow_copy_rtx + cselib_subst_to_values_from_insn
    Now, 1) is a precondition of 2): we can only subst the VALUEs if we
    have actually looked the address up.  But as can be seen on that
    testcase, we are relying on at least 1) being done, because we subst
    the values later on even on DEBUG_INSNs and actually use those when
    needed.
    cselib_subst_to_values_from_insn mostly just replaces stuff in the
    returned rtx, except for:
	      /* This used to happen for autoincrements, but we deal with
		 them properly now.  Remove the if stmt for the next
		 release.  */
	      if (! e)
		{
		  /* Assign a value that doesn't match any other.  */
		  e = new_cselib_val (next_uid, GET_MODE (x), x);
		}
    which has been like that since 2011.  I hope it is never reachable,
    and in stage1 we should replace it with a gcc_assert or just remove
    it (then it will segfault on the following return e->val_rtx;).
    So, I (as done in the patch below) reinstated 1) and not 2) for
    DEBUG_INSNs.  This fixed the new testcase, but broke the PR106746
    testcases again.  I've spent a day debugging that and found the
    problem is that, as documented in a large comment in cselib.cc above
    the n_useless_values variable definition, we spend quite a bit of
    effort on making sure that VALUEs created on DEBUG_INSNs don't
    affect the cselib decisions for non-DEBUG_INSNs, such as pruning of
    useless values etc., but if a VALUE created that way is then looked
    up/needed from non-DEBUG_INSNs, we promote it to non-debug.
    The reason for the -fcompare-debug failure is that there is one
    large DEBUG_INSN with 16 MEMs in it, mostly with addresses that so
    far didn't appear in the IL otherwise.  Later on, we see an
    instruction storing into a MEM destination and invalidate that MEM.
    Unfortunately, there is a bug caused by the introduction of
    SP_DERIVED_VALUE_P where alias.cc isn't able to disambiguate MEMs
    with sp + optional offset in address vs.
    MEMs with address being a VALUE having SP_DERIVED_VALUE_P +
    constant (or the SP_DERIVED_VALUE_P itself), which ought to be
    possible when REG_VALUES (REGNO (stack_pointer_rtx)) has a
    SP_DERIVED_VALUE_P + constant location.  Not sure if I should try
    to fix that in stage4 or defer it to stage1.  Anyway, because of
    this the cselib_invalidate_mem call invalidates basically all MEMs,
    with the exception of 5 which have MEM_EXPRs that guarantee
    non-aliasing with the sp based store.
    Unfortunately, n_useless_values, which in my understanding should
    always be the same between -g and -g0 compilations, diverges: it
    has 3 more useless values for -g.  Now, these were initially VALUEs
    created for DEBUG_INSN lookups.  As I said, cselib.cc has code to
    promote such VALUEs (well, their location elements) to non-debug if
    they are looked up from non-DEBUG_INSNs.  The problem is that when
    looking up some completely unrelated MEM from a non-DEBUG_INSN, we
    run into a hash collision and so call cselib_hasher::equal to check
    whether the unrelated MEM is equal to the one from the DEBUG_INSN
    only element.  The equal static member function calls
    rtx_equal_for_cselib_1 and, if that returns true, promotes the
    location to non-DEBUG, otherwise returns false.  So far so good.
    But rtx_equal_for_cselib_1 internally performs various other cselib
    lookups, all done with the non-DEBUG_INSN cselib_current_insn, so
    they all promote to non-debug.  And that is wrong, because if it
    was a -g0 compilation, such a hashtable entry wouldn't be there at
    all (or would be, but wouldn't contain that locs element), so with
    -g0 we wouldn't call that rtx_equal_for_cselib_1 at all.  So, I
    think we need to pretend that such a lookup, which only happens
    with -g and not -g0, actually comes from some DEBUG_INSN (note, the
    lookups rtx_equal_for_cselib_1 does are always with create = 0).
    The cselib.cc part of the patch does that.
    BTW, I'm not really sure how:
	  if (num_mems < param_max_cselib_memory_locations
	      && ! canon_anti_dependence (x, false, mem_rtx,
					  GET_MODE (mem_rtx), mem_addr))
	    {
	      has_mem = true;
	      num_mems++;
	      p = &(*p)->next;
	      continue;
	    }
    the num_mems cap can actually work correctly for -fcompare-debug;
    I'd think we would need to differentiate between num_debug_mems and
    num_mems depending on whether setting_insn is a non-NULL DEBUG_INSN
    or not.  That was one of my suspicions on this testcase, but the
    number of MEMs was small enough for the param in either case
    (especially because of the above mentioned missed non-aliasings).
    But as implemented, I think if we have tons of non-aliased MEMs
    from DEBUG_INSN setting_insns, we could unchain lots more non-DEBUG
    MEMs with -g than with -g0.

    2023-02-02  Jakub Jelinek

	PR debug/106746
	PR rtl-optimization/108463
	PR target/108484
	* cselib.cc (cselib_current_insn): Move declaration earlier.
	(cselib_hasher::equal): For debug only locs, temporarily override
	cselib_current_insn to their l->setting_insn for the
	rtx_equal_for_cselib_1 call, so that unsuccessful comparisons
	don't promote some debug locs.
	* sched-deps.cc (sched_analyze_2): For MEMs in DEBUG_INSNs when
	using cselib call cselib_lookup_from_insn on the address but
	don't substitute it.
	* gcc.dg/pr108463.c: New test.

Diff:
---
 gcc/cselib.cc                   | 28 +++++++++++++++++++++-------
 gcc/sched-deps.cc               |  9 ++++++++-
 gcc/testsuite/gcc.dg/pr108463.c | 13 +++++++++++++
 3 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/gcc/cselib.cc b/gcc/cselib.cc
index 4a9b8d9d48d..065867b4a84 100644
--- a/gcc/cselib.cc
+++ b/gcc/cselib.cc
@@ -80,6 +80,10 @@ struct expand_value_data
 
 static rtx cselib_expand_value_rtx_1 (rtx, struct expand_value_data *, int);
 
+/* This is a global so we don't have to pass this through every function.
+   It is used in new_elt_loc_list to set SETTING_INSN.  */
+static rtx_insn *cselib_current_insn;
+
 /* There are three ways in which cselib can look up an rtx:
    - for a REG, the reg_values table (which is indexed by regno) is used
    - for a MEM, we recursively look up its address and then follow the
@@ -143,11 +147,25 @@ cselib_hasher::equal (const cselib_val *v, const key *x_arg)
   /* We don't guarantee that distinct rtx's have different hash values,
      so we need to do a comparison.  */
   for (l = v->locs; l; l = l->next)
-    if (rtx_equal_for_cselib_1 (l->loc, x, memmode, 0))
+    if (l->setting_insn && DEBUG_INSN_P (l->setting_insn)
+	&& (!cselib_current_insn || !DEBUG_INSN_P (cselib_current_insn)))
       {
-	promote_debug_loc (l);
-	return true;
+	rtx_insn *save_cselib_current_insn = cselib_current_insn;
+	/* If l is so far a debug only loc, without debug stmts it
+	   would never be compared to x at all, so temporarily pretend
+	   current instruction is that DEBUG_INSN so that we don't
+	   promote other debug locs even for unsuccessful comparison.  */
+	cselib_current_insn = l->setting_insn;
+	bool match = rtx_equal_for_cselib_1 (l->loc, x, memmode, 0);
+	cselib_current_insn = save_cselib_current_insn;
+	if (match)
+	  {
+	    promote_debug_loc (l);
+	    return true;
+	  }
       }
+    else if (rtx_equal_for_cselib_1 (l->loc, x, memmode, 0))
+      return true;
 
   return false;
 }
@@ -158,10 +176,6 @@ static hash_table *cselib_hash_table;
 /* A table to hold preserved values.  */
 static hash_table *cselib_preserved_hash_table;
 
-/* This is a global so we don't have to pass this through every function.
-   It is used in new_elt_loc_list to set SETTING_INSN.  */
-static rtx_insn *cselib_current_insn;
-
 /* The unique id that the next create value will take.  */
 static unsigned int next_uid;
 
diff --git a/gcc/sched-deps.cc b/gcc/sched-deps.cc
index a9214f67432..2aa6623ad2e 100644
--- a/gcc/sched-deps.cc
+++ b/gcc/sched-deps.cc
@@ -2605,7 +2605,14 @@ sched_analyze_2 (class deps_desc *deps, rtx x, rtx_insn *insn)
 
     case MEM:
       {
-	if (!DEBUG_INSN_P (insn))
+	if (DEBUG_INSN_P (insn) && sched_deps_info->use_cselib)
+	  {
+	    machine_mode address_mode = get_address_mode (x);
+
+	    cselib_lookup_from_insn (XEXP (x, 0), address_mode, 1,
+				     GET_MODE (x), insn);
+	  }
+	else if (!DEBUG_INSN_P (insn))
 	  {
 	    /* Reading memory.  */
 	    rtx_insn_list *u;
diff --git a/gcc/testsuite/gcc.dg/pr108463.c b/gcc/testsuite/gcc.dg/pr108463.c
new file mode 100644
index 00000000000..6e94e7daed9
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr108463.c
@@ -0,0 +1,13 @@
+/* PR rtl-optimization/108463 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fsched2-use-superblocks -fcompare-debug -Wno-psabi" } */
+
+typedef int __attribute__((__vector_size__ (32))) V;
+int a;
+
+void
+foo (V v)
+{
+  a--;
+  v = (V) v;
+}
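
The cselib_hasher::equal change above is subtle, so here is a toy model of the promotion problem it fixes. This is hand-written illustrative Python, not GCC code: the names Loc, nested_lookup, equal_buggy and equal_fixed are all invented for this sketch, and the "nested lookup" is a stand-in for the lookups rtx_equal_for_cselib_1 performs internally. The point it demonstrates is that before the fix, even a failed hash-collision comparison against a debug-only loc promoted debug values (diverging -g from -g0), while the fix promotes only on a successful match.

```python
# Toy model of the cselib_hasher::equal fix (illustrative only, not GCC code).
# A loc recorded from a DEBUG_INSN must not influence -g0-visible state
# unless a non-debug insn genuinely needs its value.

class Loc:
    def __init__(self, expr, from_debug):
        self.expr = expr
        self.from_debug = from_debug  # analogue of DEBUG_INSN_P (setting_insn)

def nested_lookup(loc, current_insn_is_debug, promoted):
    # Stand-in for the internal lookups rtx_equal_for_cselib_1 performs:
    # run with a non-debug "current insn", they promote every debug loc
    # they touch, regardless of whether the outer comparison succeeds.
    if loc.from_debug and not current_insn_is_debug:
        promoted.append(loc.expr)
        loc.from_debug = False

def equal_buggy(loc, key, promoted):
    # Pre-fix behavior: the nested lookup runs with the caller's
    # (non-debug) current insn, so even a failed comparison promotes.
    nested_lookup(loc, current_insn_is_debug=False, promoted=promoted)
    return loc.expr == key

def equal_fixed(loc, key, promoted):
    # Post-fix behavior: pretend the lookup comes from the loc's own
    # DEBUG_INSN while comparing, and promote only on a successful match.
    nested_lookup(loc, current_insn_is_debug=loc.from_debug,
                  promoted=promoted)
    match = loc.expr == key
    if match and loc.from_debug:
        promoted.append(loc.expr)
        loc.from_debug = False
    return match

if __name__ == "__main__":
    promoted = []
    equal_buggy(Loc("mem1", True), "unrelated", promoted)
    print(promoted)  # failed comparison still promoted "mem1": -g diverges

    promoted = []
    equal_fixed(Loc("mem1", True), "unrelated", promoted)
    print(promoted)  # nothing promoted on failure: -g matches -g0
```

The design choice mirrored here is the one in the real patch: rather than teaching every nested lookup about debug-only locs, the fix temporarily swaps cselib_current_insn to the loc's setting_insn around the single rtx_equal_for_cselib_1 call, so all nested lookups inherit the debug context automatically.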