From: Jeff Law <jeffreyalaw@gmail.com>
To: Vineet Gupta <vineetg@rivosinc.com>, gcc-patches@gcc.gnu.org
Cc: kito.cheng@gmail.com, Palmer Dabbelt <palmer@rivosinc.com>,
gnu-toolchain@rivosinc.com, Robin Dapp <rdapp.gcc@gmail.com>
Subject: Re: scheduler queue flush (was Re: [gcc-15 0/3] RISC-V improve stack/array access by constant mat tweak)
Date: Thu, 21 Mar 2024 08:45:36 -0600
Message-ID: <3faf0264-7b82-4574-bb45-df66d77421be@gmail.com>
In-Reply-To: <2acab452-4dc0-4782-aedf-8495d84d7374@rivosinc.com>
On 3/21/24 8:36 AM, Vineet Gupta wrote:
>
>
> On 3/18/24 21:41, Jeff Law wrote:
>>
>> On 3/16/24 11:35 AM, Vineet Gupta wrote:
>>> Hi,
>>>
>>> This set of patches (for gcc-15) help improve stack/array accesses
>>> by improving constant materialization. Details are in respective
>>> patches.
>>>
>>> The first patch is the main change which improves SPEC cactu by 10%.
>> Just to confirm. Yup, 10% reduction in icounts and about a 3.5%
>> improvement in cycles on our target. Which is great!
>>
>> This also makes me wonder if cactu is the benchmark that was sensitive
>> to flushing the pending queue in the scheduler. Jivan's data would tend
>> to indicate that is the case, as several routines seem to flush the
>> pending queue often. In particular:
>>
>> ML_BSSN_RHS_Body
>> ML_BSSN_Advect_Body
>> ML_BSSN_constraints_Body
>>
>> All have a high number of dynamic instructions as well as lots of
>> flushes of the pending queue.
>>
>> Vineet, you might want to look and see if cranking up the
>> max-pending-list-length parameter helps drive down spilling. I think
>> its default value is 32 insns. I've seen it cranked up to 128 and 256
>> insns without significant ill effects on compile time.
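>>
>> For example, something like this (illustrative only; -O2, -march and
>> the file name are just stand-ins for whatever the SPEC config already
>> passes):
>>
>>   gcc -O2 -march=rv64gc --param=max-pending-list-length=128 \
>>       -c foo.c -o foo.o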
>>
>> My recollection (it's been about 3 years) of the key loop was that it
>> had a few hundred instructions, and we'd flush the pending list about
>> 50 cycles into the loop because there just wasn't enough issue bandwidth
>> to the FP units to dispatch all the FP instructions as their inputs
>> became ready. So you'd be looking for flushes in a big loop.
>
> Here are the Cactu dynamic instruction counts on top of the new splitter
> changes, for different max-pending-list-length values:
>
> default (32) : 2,565,319,368,591
> 128          : 2,509,741,035,068
> 256          : 2,527,817,813,612
> I haven't probed deeper into the generated code itself, but it's likely
> helping with spilling.
Actually, I read that as not important for this issue. While it is a
reduction of about 50B instructions, I would be looking for something with
perhaps an order of magnitude bigger impact. Ultimately I think it means
we still don't have a good handle on what's causing the spilling. Oh well.
So if we go back to Robin's observation that scheduling dramatically
increases the instruction count, perhaps we try a run with
-fno-schedule-insns -fno-schedule-insns2 and see how the instruction
counts compare.
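
Roughly something like this for the two builds (illustrative only; -O2,
-march and the file name are stand-ins for whatever the SPEC config
already uses):

  # baseline, scheduling enabled
  gcc -O2 -march=rv64gc -c foo.c -o foo-sched.o

  # both scheduling passes disabled
  gcc -O2 -march=rv64gc -fno-schedule-insns -fno-schedule-insns2 \
      -c foo.c -o foo-nosched.o

Then compare the dynamic instruction counts of the two builds.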
Jeff