public inbox for gcc-bugs@sourceware.org
From: "crazylht at gmail dot com" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/103252] questionable codegen with kmovd
Date: Wed, 24 Nov 2021 05:47:50 +0000 [thread overview]
Message-ID: <bug-103252-4-WxYDDx1rQa@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-103252-4@http.gcc.gnu.org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103252
--- Comment #14 from Hongtao.liu <crazylht at gmail dot com> ---
(In reply to Hongtao.liu from comment #13)
> >
> > So for a short-live-range reg we may lose the opportunity to allocate the
> > best regclass; maybe add a peephole2 to handle those cases instead of
> > tuning the RA.
> No, r132 is also used as an address, but currently LRA only adds the cost of
> the movement from mask to gpr. We could also run out of gprs, which would mean
> an extra spill, and that cost is not counted by record_address_regs.
>
> modified gcc/ira-costs.c
> @@ -1226,7 +1226,7 @@ record_address_regs (machine_mode mode, addr_space_t as, rtx x,
>        struct costs *pp;
>        int *pp_costs;
>        enum reg_class i;
> -      int k, regno, add_cost;
> +      int k, regno, add_cost, potential_spill_cost;
>        cost_classes_t cost_classes_ptr;
>        enum reg_class *cost_classes;
>        move_table *move_in_cost;
> @@ -1239,6 +1239,7 @@ record_address_regs (machine_mode mode, addr_space_t as, rtx x,
>          ALLOCNO_BAD_SPILL_P (ira_curr_regno_allocno_map[regno]) = true;
>        pp = COSTS (costs, COST_INDEX (regno));
>        add_cost = (ira_memory_move_cost[Pmode][rclass][1] * scale) / 2;
> +      potential_spill_cost = add_cost / 5;
>        if (INT_MAX - add_cost < pp->mem_cost)
>          pp->mem_cost = INT_MAX;
>        else
> @@ -1252,6 +1253,10 @@ record_address_regs (machine_mode mode, addr_space_t as, rtx x,
>          {
>            i = cost_classes[k];
>            add_cost = (move_in_cost[i][rclass] * scale) / 2;
> +          /* If we run out of rclass regs, there could be an extra spill;
> +             let's say a 20% possibility.  */
> +          if (!ira_class_subset_p[i][rclass])
> +            add_cost += potential_spill_cost;
>            if (INT_MAX - add_cost < pp_costs[k])
>              pp_costs[k] = INT_MAX;
Increasing the cost would lose some spill-to-mask opportunities, as in this
testcase: https://gcc.godbolt.org/z/KG63ErzEr
Thread overview: 16+ messages
2021-11-15 14:58 [Bug c/103252] New: " jason at zx2c4 dot com
2021-11-15 15:18 ` [Bug c/103252] " jason at zx2c4 dot com
2021-11-15 17:10 ` jason at zx2c4 dot com
2021-11-15 17:52 ` [Bug target/103252] " jakub at gcc dot gnu.org
2021-11-15 17:59 ` pinskia at gcc dot gnu.org
2021-11-15 18:07 ` jason at zx2c4 dot com
2021-11-16 1:20 ` crazylht at gmail dot com
2021-11-16 9:29 ` rguenth at gcc dot gnu.org
2021-11-16 11:51 ` jason at zx2c4 dot com
2021-11-16 11:56 ` jakub at gcc dot gnu.org
2021-11-16 12:06 ` jason at zx2c4 dot com
2021-11-18 7:09 ` crazylht at gmail dot com
2021-11-18 8:26 ` crazylht at gmail dot com
2021-11-19 2:42 ` crazylht at gmail dot com
2021-11-19 6:27 ` crazylht at gmail dot com
2021-11-24 5:47 ` crazylht at gmail dot com [this message]