From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: by sourceware.org (Postfix, from userid 48)
	id 18BE43857B87; Thu, 4 Jan 2024 15:24:26 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 18BE43857B87
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org;
	s=default; t=1704381867;
	bh=4WezHewfjdoXRQZqFtjHhoywR4wr+MYrfCAilRvo2Ls=;
	h=From:To:Subject:Date:In-Reply-To:References:From;
	b=U2IXSVfP3qo7GWmuKxAhNkk07fDNDIpVYFYeyqeWNEQpf3a14zJei2uuceAOi8k8g
	 y5zs4p7uNnuXVI95ttjdaawGea6HuyBWczr+DLpt7qtrBgptHdJ8e88Oa3FtXhflra
	 914H1DdajCv2ltn8XwccvzgcVNe268vXbqsyEgMw=
From: "hubicka at ucw dot cz"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/110852] [14 Regression] ICE: in
 get_predictor_value, at predict.cc:2695 with -O -fno-tree-fre and
 __builtin_expect() since r14-2219-geab57b825bcc35
Date: Thu, 04 Jan 2024 15:24:25 +0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: tree-optimization
X-Bugzilla-Version: 14.0
X-Bugzilla-Keywords: ice-on-valid-code
X-Bugzilla-Severity: normal
X-Bugzilla-Who: hubicka at ucw dot cz
X-Bugzilla-Status: NEW
X-Bugzilla-Resolution:
X-Bugzilla-Priority: P1
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: 14.0
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields:
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
List-Id:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110852

--- Comment #6 from Jan Hubicka ---
> which fixes the ICE by preferring PRED_BUILTIN_EXPECT* over others.
> At least in this case when one operand is a constant and another one is
> __builtin_expect* result that seems like the right choice to me, the fact
> that one operand is constant doesn't mean the outcome of the binary
> operation is unconditionally constant when the other operand is
> __builtin_expect* based.

The code attempts to solve a problem with no good solution. If you have
two probabilities that a certain thing happens, combining them is not
really correct (since we do not know whether the two probabilities are
independent), and the resulting value no longer corresponds to either of
the two predictors contributing to the outcome.

One exception is when one value is 100% sure, which is the case of
PRED_UNCONDITIONAL. So I would say we could simply special-case 0% and
100% probability and pick the other predictor in that case.

> But reading the
> "This incorrectly takes precedence over more reliable heuristics
> predicting that call to cold noreturn is likely not going to happen."
> in the description makes me wonder whether that is what we want always
> (though, I must say I don't understand the cold noreturn argument
> because PRED_COLD_FUNCTION is never returned from expr_expected_value_1.

But once expr_expected_value finishes its job, it attaches a predictor
to an edge, and later all predictions sitting on an edge are combined.
If you end up declaring the prediction to be
BUILTIN_EXPECT_WITH_PROBABILITY, the logic combining predictions will
believe that the value is very reliable and will ignore the other
predictors. This is the PRED_FLAG_FIRST_MATCH mode.
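To illustrate the difference, here is a small standalone sketch
(hypothetical helper, not the actual predict.cc code) of the
Dempster-Shafer style combination used when edge predictions are merged
without PRED_FLAG_FIRST_MATCH: two agreeing but individually unreliable
predictions reinforce each other, instead of the first one simply
winning:

  #include <stdio.h>

  /* Hypothetical helper mirroring the spirit of the DS theory
     combination used when merging edge predictions without
     PRED_FLAG_FIRST_MATCH; an illustration only, not the actual
     predict.cc implementation.  */
  static double
  ds_combine (double p1, double p2)
  {
    return (p1 * p2) / (p1 * p2 + (1.0 - p1) * (1.0 - p2));
  }

  int
  main (void)
  {
    /* Two agreeing 70% and 80% predictions combine to about 90%,
       while first-match would keep only the predictor it considers
       most reliable and drop the other.  */
    printf ("%.6f\n", ds_combine (0.70, 0.80)); /* prints 0.903226 */
    return 0;
  }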
So computing an uncertain probability and declaring it to be
BUILTIN_EXPECT_WITH_PROBABILITY is worse than declaring it to be a
predictor with the PRED_DS_THEORY merging mode (which assumes that the
value is determined by possibly unreliable heuristics).

So I would go with special-casing the 0% and 100% predictors (which can
simply be stripped and forgotten). For the rest we could probably
introduce PRED_COMBINED_VALUE, which would be like
BUILTIN_EXPECT_WITH_PROBABILITY but with the DS_THEORY merging mode (a
sketch of such a predict.def entry is at the end of this mail). It is
probably better than nothing, but the combined value certainly cannot
be trusted as reliable anymore.

>
> Oh, another thing is that before the patch there were two spots that
> merged the predictors, one for the binary operands (which previously
> picked the weaker predictor and now picks the stronger predictor), but
> also in the PHI handling (which still picks the weaker predictor);
> shouldn't that be changed to match, including the != -1 handling?

PHIs are even more fishy, since we have no idea of the probability of
entering the basic block through a given edge. We can probably ignore
basic blocks that already have a profile_count of 0. Otherwise we may
try to do a DS theory combination of the incoming values. I can cook up
a patch. (Also, fixing other profile-related issues is very high on my
TODO list now.)
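For reference, here is what such a predict.def entry might look like.
The name and hitrate below are placeholders following the existing
DEF_PREDICTOR (ENUM, NAME, HITRATE, FLAGS) entries; the important part
is the 0 in the flags field, i.e. no PRED_FLAG_FIRST_MATCH, which
selects the DS theory merging mode:

  /* Hypothetical predict.def entry for the PRED_COMBINED_VALUE idea
     above.  PROB_UNINITIALIZED means the probability is supplied by
     the prediction itself (as for
     PRED_BUILTIN_EXPECT_WITH_PROBABILITY) rather than by a fixed
     hitrate.  */
  DEF_PREDICTOR (PRED_COMBINED_VALUE, "combined value",
		 PROB_UNINITIALIZED, 0)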