From: "Maciej W. Rozycki" <macro@orcam.me.uk>
To: Segher Boessenkool <segher@kernel.crashing.org>
Cc: gcc-patches@gcc.gnu.org
Subject: Re: [PATCH] Turn on LRA on all targets
Date: Mon, 15 May 2023 22:09:12 +0100 (BST)
Message-ID: <alpine.DEB.2.21.2305152104010.11892@angie.orcam.me.uk>
In-Reply-To: <20230423203328.GL19790@gate.crashing.org>
On Sun, 23 Apr 2023, Segher Boessenkool wrote:
> > There are extra ICEs in regression testing and code quality is poor; cf.
> > <https://gcc.gnu.org/pipermail/gcc-patches/2022-January/588296.html>.
>
> Do you have something you can show for this? Maybe in a PR?
 I have filed no PRs, as I didn't assess the collateral damage at the time
I looked at it.  I only ran regression-testing with `-mlra' shortly after
I completed the MODE_CC conversion and added the option, to see what lay
beyond.  And I only added `-mlra' and made the minimal changes needed to
make the compiler build again, just to make it easier to proceed towards
LRA.
> And, are the ICEs in the generic code, or something vax-specific?
At least some were in generic code, e.g.:
during RTL pass: combine
.../gcc/testsuite/gcc.c-torture/compile/pr101562.c: In function 'foo':
.../gcc/testsuite/gcc.c-torture/compile/pr101562.c:12:1: internal compiler error: in insert, at wide-int.cc:682
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
compiler exited with status 1
FAIL: gcc.c-torture/compile/pr101562.c -O1 (internal compiler error)
FAIL: gcc.c-torture/compile/pr101562.c -O1 (test for excess errors)
(coming from `gcc_checking_assert (precision >= width)'), or:
In file included from .../gcc/testsuite/g++.dg/modules/xtreme-header-2.h:10,
from .../gcc/testsuite/g++.dg/modules/xtreme-header-2_a.H:4:
.../vax-netbsdelf/libstdc++-v3/include/regex:42: internal compiler error: in set_filename, at cp/module.cc:19134
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://gcc.gnu.org/bugs/> for instructions.
compiler exited with status 1
FAIL: g++.dg/modules/xtreme-header-2_a.H -std=c++2b (internal compiler error)
FAIL: g++.dg/modules/xtreme-header-2_a.H -std=c++2b (test for excess errors)
(from `gcc_checking_assert (!filename)').  As I say, I did not assess this
at all back then and the logs are dated Nov 2021 (I had to chase them).

 Also I'm not going to dedicate any time now to switching the VAX backend
to LRA: old reload continues to work, while we have a non-functional
exception unwinder that has never worked, as I have recently discovered.
That breaks lots of C++ code, including in particular native VAX/NetBSD
GDB and `gdbserver' (my newly-ported implementation), which is a bit of a
problem (native VAX/NetBSD GCC has been spared owing to the decision not
to use exceptions).
 And fixing the unwinder is going to be a major effort, owing to how the
VAX CALLS machine instruction works and to how the stack frame has
consequently been structured; it is unlike any other ELF target, and even
if it can be expressed in DWARF terms (which I'm not entirely sure about),
it is going to require a dedicated handler, as with ARM or IA64.
 I may choose to implement a non-DWARF unwinder instead, as the VAX stack
frame is always fully described by the hardware and there is never a need
for debug information to decode any VAX stack frame (the RET machine
instruction uses the stack frame information to restore the previous PC,
FP, SP, AP and any static registers saved by CALLS).
So implementing a working exception unwinder has to take precedence over
LRA and I do hope to complete it during this release cycle, but I may not
have any time left for LRA.
 Please keep this in mind with any plans to drop old reload.  I'd highly
appreciate that, and I do keep LRA on my radar as the next item to address
after the unwinder; by no means has it been lost.
Maciej