From: Jeff Law <jeffreyalaw@gmail.com>
To: Richard Biener <rguenther@suse.de>, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] Adjust backwards threader costing of PHIs
Date: Fri, 5 Aug 2022 09:46:29 -0600 [thread overview]
Message-ID: <4fc37a51-ef49-07d8-d927-d8a1f1ce2eeb@gmail.com> (raw)
In-Reply-To: <20220805135803.8DAE2133B5@imap2.suse-dmz.suse.de>
On 8/5/2022 7:58 AM, Richard Biener wrote:
> The following adjusts the costing of PHIs to match how I understand
> the comment and maybe the original intent. There will be no
> non-degenerate PHI nodes remaining on the threaded path, but when there
> are alternate path exits, PHI nodes at their destinations will likely
> require extra copies on those edges, and that's what we want to account
> for.
>
> Jeff added this code a long time ago and at least the special-casing
> of (just) m_name does not make sense anymore.
>
> Unfortunately this tweaks the heuristics enough to make the threader
> thread an unfortunate path in libgomp/team.c so that
> -Walloca-larger-than diagnoses an allocation of -1 elements. I'm
> not 100% sure this condition is impossible, so I've added a guard
> against allocating zero or "less" stack. There's also an uninit
> diagnostic in opts.cc about best_match::m_best_candidate_len that
> looks like a false positive; since every other member is initialized,
> the patch initializes this one as well to avoid the false
> positive.
>
> I have yet to analyze some fallout as well:
>
> FAIL: gcc.dg/uninit-pred-9_b.c bogus warning (test for bogus messages, line 20)
> FAIL: gcc.dg/tree-ssa/phi_on_compare-4.c scan-tree-dump-times dom2 "Removing basic block" 1
> FAIL: gcc.dg/tree-ssa/pr59597.c scan-tree-dump-times threadfull1 "Registering jump thread" 4
> FAIL: gcc.dg/tree-ssa/pr61839_2.c scan-tree-dump-times evrp "%" 0
> FAIL: gcc.dg/tree-ssa/pr61839_2.c scan-tree-dump-times evrp "972195717 % " 0
> FAIL: gcc.dg/tree-ssa/pr61839_2.c scan-tree-dump-times evrp "972195717 / " 0
> FAIL: gcc.dg/tree-ssa/pr61839_2.c scan-tree-dump-times evrp "__builtin_abort" 1
>
> the early threading FAILs are because we now account for the mid-exit
> PHI copies and early threading has a limit of a single copied insn.
> But that testcase was never about threading but about VRP, which has
> seemingly regressed in the meantime...
>
> Bootstrapped and tested (with the above FAILs) on
> x86_64-unknown-linux-gnu.
>
> Any comments besides the FAILout?
I don't recall adding that code, but I did find it in the archives.
https://gcc.gnu.org/pipermail/gcc-patches/2016-March/443452.html
But even with that and reviewing the PR, I still don't remember much
about this particular chunk of code. I do recall that I was never
actually happy with the pr69196 state, and that's why we've kept it open
all these years.
It doesn't look like tree-ssa/pr69196 has regressed, so if you're happy
with the patch, I've got no objections.
jeff