public inbox for gcc-patches@gcc.gnu.org
From: Richard Sandiford <richard.sandiford@arm.com>
To: Robin Dapp <rdapp.gcc@gmail.com>
Cc: gcc-patches@gcc.gnu.org,  Jeff Law <jeffreyalaw@gmail.com>
Subject: Re: [PATCH] RFC: Add late-combine pass [PR106594]
Date: Sat, 07 Oct 2023 13:58:17 +0100	[thread overview]
Message-ID: <mptcyxqio7q.fsf@arm.com> (raw)
In-Reply-To: <b71a2601-d233-93ad-bb80-3b7a89ab52f3@gmail.com> (Robin Dapp's message of "Mon, 2 Oct 2023 17:18:06 +0200")

Robin Dapp <rdapp.gcc@gmail.com> writes:
> Hi Richard,
>
> cool, thanks.  I just gave it a try with my test cases and it does what
> it is supposed to do, at least if I disable the register pressure check :)
> A cursory look over the test suite showed no major regressions and just
> some overly specific tests.
>
> My test case only works before split, though, as the UNSPEC predicates will
> prevent further combination afterwards.
>
> Right now the (pre-RA) code combines every instance, disregarding the actual
> pressure and checking only that the "new" value does not occupy more registers
> than the old one.
>
> - Shouldn't the "pressure" also depend on the number of available hard regs
> (i.e. an nregs = 2 is not necessarily worse than nregs = 1 if we have 32
> hard regs in the new class vs 16 in the old one)?

Right, that's what I meant by extending/tweaking the pressure heuristics
for your case.

> - I assume/hope you expected my (now obsolete) fwprop change could be re-used?

Yeah, I was hoping you'd be able to apply similar heuristics to the new pass.
(I didn't find time to look at the old heuristics in detail, though, sorry.)

I suppose the point of comparison would then be "new pass with current
heuristics" vs. "new pass with relaxed heuristics".

It'd be a good/interesting test of the new heuristics to apply them
without any constraint on the complexity of the SET_SRC.

> Otherwise, wouldn't we unconditionally "propagate" into a loop, for example?
> For my test case the combination of the vec_duplicate into all insns leads
> to "high" register pressure that we could avoid.
>
> How should we continue here?  I suppose you'll first want to get this version
> to the trunk before complicating it further.

Yeah, that'd probably be best.  I need to split the patch up into a
proper submission sequence, do more testing, and make it RFA quality.
Jeff has also found a couple of regressions that I need to look at.

But the substance probably won't change much, so I don't think you'd
be wasting your time if you developed the heuristics based on the
current version.  I'd be happy to review them on that basis too
(though time is short at the moment).

Thanks,
Richard


Thread overview: 5+ messages
2023-09-26 16:21 Richard Sandiford
2023-09-28 12:37 ` Jeff Law
2023-10-02 15:18 ` Robin Dapp
2023-10-07 12:58   ` Richard Sandiford [this message]
2023-10-10 18:01     ` Jeff Law
