From: Jeff Law <jeffreyalaw@gmail.com>
To: Vineet Gupta <vineetg@rivosinc.com>,
Manolis Tsamis <manolis.tsamis@vrull.eu>
Cc: gcc-patches@gcc.gnu.org,
Philipp Tomsich <philipp.tomsich@vrull.eu>,
Richard Biener <richard.guenther@gmail.com>,
Jakub Jelinek <jakub@redhat.com>,
gnu-toolchain <gnu-toolchain@rivosinc.com>
Subject: Re: [PATCH v3] Implement new RTL optimizations pass: fold-mem-offsets.
Date: Tue, 18 Jul 2023 22:31:15 -0600
Message-ID: <93fd5732-145a-94ab-5a9e-5a2eb8bac886@gmail.com>
In-Reply-To: <59f02830-eae9-6a52-dd63-48a7217905ef@rivosinc.com>
On 7/18/23 17:42, Vineet Gupta wrote:
> Hi Manolis,
>
> On 7/18/23 11:01, Jeff Law via Gcc-patches wrote:
>> Vineet @ Rivos has indicated he stumbled across an ICE with the V3
>> code. Hopefully he'll get a testcase for that extracted shortly.
>
> Yeah, I was trying to build SPEC2017 with this patch and ran into ICEs
> for several of the benchmarks with an -Ofast build. The reduced test
> from 455.nab is attached here.
> The issue happens with v2 as well, so it is not something introduced by v3.
>
> There's an ICE in cprop_hardreg, which immediately follows f-m-o.
>
>
> The protagonist is insn 93, which starts off in combine as a simple set
> of DFmode 0.0:
>
> | sff.i.288r.combine:(insn 93 337 94 8 (set (reg/v:DF 236 [ e ])
> | sff.i.288r.combine- (const_double:DF 0.0 [0x0.0p+0])) "sff.i":23:11 190 {*movdf_hardfloat_rv64}
>
> Subsequently, reload replaces the destination register with a stack slot
> at SP + offset:
>
> | sff.i.303r.reload:(insn 93 337 94 9 (set (mem/c:DF (plus:DI (reg/f:DI 2 sp)
> | sff.i.303r.reload- (const_int 8 [0x8])) [4 %sfp+-8 S8 A64])
> | sff.i.303r.reload- (const_double:DF 0.0 [0x0.0p+0])) "sff.i":23:11 190 {*movdf_hardfloat_rv64}
> | sff.i.303r.reload- (expr_list:REG_EQUAL (const_double:DF 0.0 [0x0.0p+0])
>
> It then gets processed by f-m-o and lands in cprop_hardreg, where it
> triggers the ICE:
>
> | (insn 93 337 523 11 (set (mem/c:DF (plus:DI (reg/f:DI 2 sp)
> | (const_int 8 [0x8])) [4 %sfp+-8 S8 A64])
> | (const_double:DF 0.0 [0x0.0p+0])) "sff.i":23:11 -1
> ^^^
> | (expr_list:REG_EQUAL (const_double:DF 0.0 [0x0.0p+0])
> | (nil)))
> | during RTL pass: cprop_hardreg
>
> Here's my analysis:
>
> In f-m-o, do_check_validity() -> insn_invalid_p() tries to recog() a
> modified version of insn 93 (actually there is no change here, so perhaps
> that is something we can optimize later). The corresponding md pattern,
> movdf_hardfloat_rv64, no longer matches since it expects REG_P for
> operand 0, while reload has turned that operand into a mem at SP + offset.
> f-m-o then does the right thing by setting INSN_CODE to -1 so that a
> subsequent recog() will work correctly.
> But it seems this -1 lingers into the next pass and trips up
> copyprop_hardreg_forward_1() -> extract_constrain_insn().
> So I don't know what the right fix here should be.
This is a bug in the RISC-V backend. I actually fixed basically the
same bug, exposed by the f-m-o code, in another backend.
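For reference, the caching that makes the stale -1 matter looks roughly
like this (paraphrased from GCC's recog.h, not quoted verbatim):

  /* Rough paraphrase of recog.h:recog_memoized (), not verbatim GCC
     source.  Setting INSN_CODE to -1 just means "not recognized yet":
     the next consumer re-runs recog ().  If no backend pattern matches
     the insn, extract_insn () reaches fatal_insn_not_found () and the
     asking pass -- here cprop_hardreg via extract_constrain_insn () --
     ICEs with "unrecognizable insn".  */
  static inline int
  recog_memoized (rtx_insn *insn)
  {
    if (INSN_CODE (insn) < 0)
      INSN_CODE (insn) = recog (PATTERN (insn), insn, 0);
    return INSN_CODE (insn);
  }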
>
> In a run with -fno-fold-mem-offsets, the same insn 93 is successfully
> grokked by cprop_hardreg:
>
> | (insn 93 337 522 11 (set (mem/c:DF (plus:DI (reg/f:DI 2 sp)
> | (const_int 8 [0x8])) [4 %sfp+-8 S8 A64])
> | (const_double:DF 0.0 [0x0.0p+0])) "sff.i":23:11 190 {*movdf_hardfloat_rv64}
> |                                                     ^^^^^^^^^^^^^^^^^^^^^^^^
> | (expr_list:REG_EQUAL (const_double:DF 0.0 [0x0.0p+0])
> | (nil)))
>
> P.S. I wonder if it is a good idea in general to call recog() post
> reload since the insn could be changed sufficiently to no longer match
> the md patterns. Of course I don't know the answer.
If this ever causes a problem, it's a backend bug. It's that simple.
Conceptually it should always be safe to set INSN_CODE to -1 for any insn.
Odds are that for this specific case in the RV backend we just need a
constraint that allows storing 0.0 into a memory location. That can
actually be implemented as a store from x0, since 0.0 has the bit
pattern 0x0. This is probably a good thing to expose as an optimization
anyway, and it can move forward independently of the f-m-o patch.
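Just to illustrate the bit-pattern argument, here is a quick standalone
check (plain C++, not GCC source; the RISC-V store mentioned in the
comment is only an example of what such an alternative could emit):

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  int main ()
  {
    // IEEE-754 +0.0 is the all-zero bit pattern, so storing integer
    // zero (register x0/"zero" on RISC-V, e.g. "sd zero,8(sp)") into
    // the stack slot produces exactly the bits of DFmode 0.0.
    double d = 0.0;
    std::uint64_t bits;
    std::memcpy (&bits, &d, sizeof bits);
    std::printf ("%016llx\n", (unsigned long long) bits);  // 0000000000000000
    return 0;
  }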
>
> P.S.2 While debugging, I noticed a minor annoyance in the patch: the
> whole fold_mem_offsets_driver() switch-case indirection. It doesn't
> seem to serve any purpose, and we could simply call the corresponding
> do_* routines from execute() itself.
We were in the process of squashing some of this out of the
implementation. I hadn't yet looked at the V3 patch to see how much
progress had been made on that.
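As a purely illustrative sketch of that simplification (the class and
do_* phase names below are placeholders, not the routines the v3 patch
actually defines):

  // Hypothetical sketch only: execute () calls the phases directly
  // instead of dispatching each one through a switch statement in
  // fold_mem_offsets_driver ().
  unsigned int
  pass_fold_mem_offsets::execute (function *fn)
  {
    do_analysis (fn);       // collect candidate memory offsets
    do_fold_offsets (fn);   // rewrite the folded offsets
    do_cleanup (fn);        // remove now-dead offset computations
    return 0;
  }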
Thanks for digging into this!
jeff