public inbox for gcc-bugs@sourceware.org
From: "msebor at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/101830] [12 Regression] Incorrect error messages beginning with r12-2591 (backward jump threader)
Date: Thu, 12 Aug 2021 17:35:04 +0000
Message-ID: <bug-101830-4-HLePOZhVDU@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-101830-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101830

--- Comment #6 from Martin Sebor <msebor at gcc dot gnu.org> ---
I've only looked at the first warning so far.  It's issued for the access
in bb 8:

  <bb 5> [local count: 4057510040]:
  pos.80_31 = pos;
  if (pos.80_31 <= 1023)
    goto <bb 8>; [96.34%]
  else
    goto <bb 6>; [3.66%]

  <bb 8> [local count: 256307115]:
  # pos.80_21 = PHI <pos.80_81(36)>
  _1 = linebuf[pos.80_21];          <<< -Warray-bounds
  ...

The index is in the range [1024, INT_MAX], so the warning is correct given
the IL.  There isn't much I see that could be improved about the diagnostic
except mentioning the range of the subscript rather than just its lower
bound.

Neither this instance of the warning nor its phrasing has changed in years.
It's not the result of a recent enhancement or a questionable heuristic; it
simply reflects a change in the IL, and it has always been phrased as "is
out of bounds."  A "may be out of bounds" form does not exist, never has,
and adding one wouldn't help in this instance.

That said, since pos is a global variable, the test in safe_inc_pos() that
would otherwise constrain its value only has that effect in the absence of
intervening statements that might overwrite it.  You might get a better
result with a pair of "setter" and "getter" functions, where the latter
asserts the range via __builtin_unreachable() before returning the
variable.  Otherwise, the test is likely what the backward threader uses to
introduce the unreachable branch, which isn't eliminated because GCC can't
prove the variable isn't incremented beyond its upper limit.
(Aldy is in a much better position to explain this.)
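To illustrate the getter/setter suggestion above: a minimal sketch of what such a pair might look like.  The names (inc_pos, get_pos, current_char) and the buffer size are hypothetical, chosen to mirror the linebuf/pos/safe_inc_pos() shape described in the report; the point is only that asserting the invariant inside the getter lets the ranger see the subscript's bounds at every use, even though pos is a global that other statements could in principle overwrite.

```c
#include <limits.h>

#define LINEBUF_SIZE 1024

/* Hypothetical stand-ins for the globals in the report.  */
static int  pos;
static char linebuf[LINEBUF_SIZE];

/* Setter: advance the index without letting it leave the buffer,
   roughly what safe_inc_pos() presumably does.  */
static void inc_pos (void)
{
  if (pos < LINEBUF_SIZE - 1)
    pos++;
}

/* Getter: assert the invariant so the optimizer can assume the
   returned value is always a valid subscript.  */
static int get_pos (void)
{
  if (pos < 0 || pos >= LINEBUF_SIZE)
    __builtin_unreachable ();
  return pos;
}

char current_char (void)
{
  /* The subscript here is provably in [0, LINEBUF_SIZE - 1].  */
  return linebuf[get_pos ()];
}
```

Because the range assertion sits in the getter rather than in a separate test, every read of the index re-establishes the invariant locally, so no intervening store to the global can invalidate it between the check and the use.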
Thread overview: 15+ messages

2021-08-09 15:54 [Bug tree-optimization/101830] New: Incorrect error messages beginning with r12-2591 (backward jump threader)  wschmidt at gcc dot gnu.org
2021-08-09 15:57 ` [Bug tree-optimization/101830] Incorrect error messages beginning with r12-2591 (backward jump threader)  wschmidt at gcc dot gnu.org
2021-08-09 17:55 ` pinskia at gcc dot gnu.org
2021-08-09 19:46 ` [Bug tree-optimization/101830] [12 Regression] Incorrect error messages beginning with r12-2591 (backward jump threader)  wschmidt at gcc dot gnu.org
2021-08-10  1:30 ` segher at gcc dot gnu.org
2021-08-10 12:33 ` wschmidt at gcc dot gnu.org
2021-08-12 17:35 ` msebor at gcc dot gnu.org  [this message]
2021-08-12 17:47 ` wschmidt at gcc dot gnu.org
2021-08-12 17:50 ` msebor at gcc dot gnu.org
2021-08-12 18:06 ` wschmidt at gcc dot gnu.org
2021-08-12 18:44 ` wschmidt at gcc dot gnu.org
2021-08-12 19:54 ` wschmidt at gcc dot gnu.org
2021-08-12 20:15 ` msebor at gcc dot gnu.org
2021-08-12 20:39 ` wschmidt at gcc dot gnu.org
2021-08-23 21:00 ` [Bug target/101830] Incorrect error messages beginning with r12-2591 (backward jump threader)  cvs-commit at gcc dot gnu.org