From: Aldy Hernandez <aldyh@redhat.com>
To: Michael Matz <matz@suse.de>
Cc: Richard Biener <richard.guenther@gmail.com>,
	Jeff Law <jeffreyalaw@gmail.com>,
	GCC Mailing List <gcc@gcc.gnu.org>,
	Andrew MacLeod <amacleod@redhat.com>
Subject: Re: More aggressive threading causing loop-interchange-9.c regression
Date: Thu, 9 Sep 2021 15:37:03 +0200
Message-ID: <7dad1f1f-98e3-f6c7-8cbd-d01122b72260@redhat.com>
In-Reply-To: <alpine.LSU.2.20.2109091249420.12583@wotan.suse.de>



On 9/9/21 2:52 PM, Michael Matz wrote:
> Hello,
> 
> On Thu, 9 Sep 2021, Aldy Hernandez wrote:
> 
>> The ldist-22 regression is interesting though:
>>
>> void foo ()
>> {
>>    int i;
>>
>>    <bb 2> :
>>    goto <bb 6>; [INV]
>>
>>    <bb 3> :
>>    a[i_1] = 0;
>>    if (i_1 > 100)
>>      goto <bb 4>; [INV]
>>    else
>>      goto <bb 5>; [INV]
>>
>>    <bb 4> :
>>    b[i_1] = i_1;
>>
>>    <bb 5> :
>>    i_8 = i_1 + 1;
>>
>>    <bb 6> :
>>    # i_1 = PHI <0(2), i_8(5)>
>>    if (i_1 <= 1023)
>>      goto <bb 3>; [INV]
>>    else
>>      goto <bb 7>; [INV]
> 
> Here there's no simple latch block to start with (the backedge comes
> directly out of the loop exit block).  So my suggested improvement
> (testing if the latch is empty and only then rejecting the thread)
> would solve this.

Well, that's the thing.  Loop discovery is marking BB5 as the latch, so 
the path is not getting threaded:

Checking profitability of path (backwards):  bb:6 (2 insns) bb:5 (latch)
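
For reference, here's roughly the C source the GIMPLE above corresponds 
to (my reconstruction from the IL, not necessarily the exact ldist-22 
testcase; the array declarations are assumed):

  int a[1024], b[1024];

  void foo (void)
  {
    /* BB6 is the exit test (i <= 1023), BB3 the store to a[],
       BB4 the conditional store to b[], and BB5 -- the latch --
       the i++ increment.  */
    for (int i = 0; i <= 1023; i++)
      {
        a[i] = 0;
        if (i > 100)
          b[i] = i;
      }
  }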

> 
>> Would it be crazy to suggest that we disable threading through latches
>> altogether,
> 
> I think it wouldn't be crazy, but we can do a bit better as suggested
> above (only reject empty latches, and reject it only for the threaders
> coming before the loop optims).

BTW, I'm not sure your check for the latch being in a non-final position 
makes a difference:

> diff --git a/gcc/tree-ssa-threadbackward.c b/gcc/tree-ssa-threadbackward.c
> index 449232c7715..528a753b886 100644
> --- a/gcc/tree-ssa-threadbackward.c
> +++ b/gcc/tree-ssa-threadbackward.c
> @@ -600,6 +600,7 @@ back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
>    loop_p loop = m_path[0]->loop_father;
>    bool path_crosses_loops = false;
>    bool threaded_through_latch = false;
> +  bool latch_within_path = false;
>    bool multiway_branch_in_path = false;
>    bool threaded_multiway_branch = false;
>    bool contains_hot_bb = false;
> @@ -725,7 +726,13 @@ back_threader_profitability::profitable_path_p (const vec<basic_block> &m_path,
>  	 the last entry in the array when determining if we thread
>  	 through the loop latch.  */
>        if (loop->latch == bb)
> -	threaded_through_latch = true;
> +	{
> +	  threaded_through_latch = true;
> +	  if (j != 0)
> +	    latch_within_path = true;
> +	  if (dump_file && (dump_flags & TDF_DETAILS))
> +	    fprintf (dump_file, " (latch)");
> +	}
>      }

If the last position being considered is a simple latch, it only has a 
single outgoing (unconditional) jump, so there is nothing to thread 
there.  You need a block with >= 2 successor edges to thread anything.
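
FWIW, if we go with your idea, something like this is what I have in 
mind for profitable_path_p (a sketch only -- empty_block_p is the 
existing helper from tree-cfg, while the m_before_loop_opts bit for 
"this threader runs before the loop optimizers" is hypothetical):

  /* Sketch: threading through an empty latch would make it non-empty,
     destroying the simple loop structure the loop optimizers expect,
     so reject such paths while they still have to run.  */
  if (threaded_through_latch
      && m_before_loop_opts
      && empty_block_p (loop->latch))
    {
      if (dump_file && (dump_flags & TDF_DETAILS))
        fprintf (dump_file,
                 "  FAIL: Would create a non-empty latch before the "
                 "loop optimizers.\n");
      return false;
    }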

Aldy

