public inbox for gcc-bugs@sourceware.org
* [Bug rtl-optimization/104400] New: [12 Regression] v850e lra/reload failure after recent change
@ 2022-02-05 18:57 law at gcc dot gnu.org
  2022-02-07  7:51 ` [Bug rtl-optimization/104400] " rguenth at gcc dot gnu.org
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: law at gcc dot gnu.org @ 2022-02-05 18:57 UTC (permalink / raw)
  To: gcc-bugs

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104400

            Bug ID: 104400
           Summary: [12 Regression] v850e lra/reload failure after recent
                    change
           Product: gcc
           Version: 12.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: rtl-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: law at gcc dot gnu.org
  Target Milestone: ---

After this change:

commit 85419ac59724b7ce710ebb4acf03dbd747edeea3 (HEAD, refs/bisect/bad)
Author: Vladimir N. Makarov <vmakarov@redhat.com>
Date:   Fri Jan 21 13:34:32 2022 -0500

    [PR103676] LRA: Calculate and exclude some start hard registers for reload pseudos

    LRA and the old reload pass use only one register class for reload
    pseudos even if the operand constraints contain more than one register
    class.  Consider the constraint 'lh' for thumb arm, which means low and
    high thumb registers.  A reload pseudo for such a constraint will have
    the general reg class (the union of the low and high reg classes).
    Assigning the last low register to the reload pseudo is wrong if the
    pseudo is of DImode, as it requires two hard regs.  But it is considered
    OK if we use the general reg class.  The following patch solves this
    problem for LRA.

    gcc/ChangeLog:

            PR target/103676
[ ... ]
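The register-class pitfall described in the commit message above can be modeled with a short toy sketch (illustrative only, not GCC code; the register numbering and the `fits` helper are assumptions made for this example):

```python
# Toy model (not GCC internals): a reload pseudo whose constraint allows
# both low (r0-r7) and high (r8-r12) thumb registers gets the union class.
LOW = list(range(0, 8))       # r0..r7
HIGH = list(range(8, 13))     # r8..r12
UNION = LOW + HIGH

def fits(start, nregs, hard_regs):
    # A value needing nregs consecutive hard registers fits at `start`
    # only if every register in the span belongs to the class.
    return all(start + i in hard_regs for i in range(nregs))

# A DImode value needs two consecutive hard registers.  Starting at r7,
# the last low register, is wrong for the low subclass, since r8 is high:
print(fits(7, 2, LOW))    # False: r7..r8 does not fit in the low class
# But checked against the union class, the same start looks (wrongly) fine:
print(fits(7, 2, UNION))  # True: the union class hides the violation
```

This is why the fix computes and excludes certain start hard registers for reload pseudos rather than trusting the union class alone.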

The v850e-elf port will no longer build newlib due to a spill failure.

I've narrowed the test down, but haven't done any debugging to see if this is
really an LRA issue or a backend issue.

Compile with -O2 -mv850e3v5 to trigger:
./cc1 -O2 -mv850e3v5 j.c
 frob
Analyzing compilation unit
Performing interprocedural optimizations
 <*free_lang_data> {heap 1200k} <visibility> {heap 1200k}
 <build_ssa_passes> {heap 1200k} <opt_local_passes> {heap 1200k}
 <remove_symbols> {heap 1616k} <targetclone> {heap 1616k}
 <free-fnsummary> {heap 1616k} <emutls> {heap 1616k}
Streaming LTO
 <whole-program> {heap 1616k} <profile_estimate> {heap 1616k}
 <icf> {heap 1616k} <devirt> {heap 1616k} <cp> {heap 1616k}
 <sra> {heap 1616k} <fnsummary> {heap 1616k} <inline> {heap 1616k}
 <pure-const> {heap 1616k} <modref> {heap 1616k}
 <free-fnsummary> {heap 1616k} <static-var> {heap 1616k}
 <single-use> {heap 1616k} <comdats> {heap 1616k}
Assembling functions:
 frob
j.c: In function 'frob':
j.c:7:1: error: unable to find a register to spill
    7 | }
      | ^
j.c:7:1: error: this is the insn:
(insn 22 26 25 2 (set (mem/c:DI (reg/f:SI 34 .fp) [1 %sfp+-8 S8 A32])
        (reg:DI 52)) "j.c":4:7 1 {*movdi_internal}
     (expr_list:REG_DEAD (reg:DI 52)
        (nil)))
during RTL pass: reload
j.c:7:1: internal compiler error: in lra_split_hard_reg_for, at lra-assigns.cc:1837


double frob (double r)
{
    r = -r;
    return r;
}


end of thread

Thread overview: 7+ messages
2022-02-05 18:57 [Bug rtl-optimization/104400] New: [12 Regression] v850e lra/reload failure after recent change law at gcc dot gnu.org
2022-02-07  7:51 ` [Bug rtl-optimization/104400] " rguenth at gcc dot gnu.org
2022-02-09 14:47 ` vmakarov at gcc dot gnu.org
2022-02-09 17:14 ` law at gcc dot gnu.org
2022-02-10 15:22 ` vmakarov at gcc dot gnu.org
2022-02-11 15:50 ` cvs-commit at gcc dot gnu.org
2022-02-11 16:57 ` law at gcc dot gnu.org
