public inbox for gcc-cvs@sourceware.org
* [gcc(refs/vendors/redhat/heads/gcc-8-branch)] aarch64: Tidy aarch64_split_compare_and_swap
@ 2020-09-17 16:58 Jakub Jelinek
From: Jakub Jelinek @ 2020-09-17 16:58 UTC (permalink / raw)
To: gcc-cvs
https://gcc.gnu.org/g:fd7a5f21ceb5fb6247bd42110e0c158a672b8ed6
commit fd7a5f21ceb5fb6247bd42110e0c158a672b8ed6
Author: Andre Vieira <andre.simoesdiasvieira@arm.com>
Date: Thu Apr 16 10:16:13 2020 +0100
aarch64: Tidy aarch64_split_compare_and_swap
2020-04-16 Andre Vieira <andre.simoesdiasvieira@arm.com>
Backport from mainline.
2019-09-19 Richard Henderson <richard.henderson@linaro.org>
* config/aarch64/aarch64.c (aarch64_split_compare_and_swap): Disable
strong_zero_p for aarch64_track_speculation; unify some code paths;
use aarch64_gen_compare_reg instead of open-coding.
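For context, aarch64_gen_compare_reg is the existing helper that emits a
comparison and hands back the CC register, so callers no longer open-code
the (set (reg CC) (compare ...)) themselves. A simplified paraphrase of the
helper follows; it is close to, but not copied verbatim from, the
gcc-8-branch definition in gcc/config/aarch64/aarch64.c:

/* Simplified paraphrase of aarch64_gen_compare_reg: emit a comparison
   of X and Y and return the CC register holding the result.  The real
   definition lives in gcc/config/aarch64/aarch64.c; the details here
   are reconstructed from memory, not quoted from the branch.  */
rtx
aarch64_gen_compare_reg (RTX_CODE code, rtx x, rtx y)
{
  machine_mode cc_mode = SELECT_CC_MODE (code, x, y);
  rtx cc_reg = gen_rtx_REG (cc_mode, CC_REGNUM);

  /* (set cc_reg (compare x y)) -- callers then branch or conditionally
     select on cc_reg.  */
  emit_set_insn (cc_reg, gen_rtx_COMPARE (cc_mode, x, y));
  return cc_reg;
}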
Diff:
---
gcc/ChangeLog | 11 ++++++++++-
gcc/config/aarch64/aarch64.c | 40 +++++++++++++---------------------------
2 files changed, 23 insertions(+), 28 deletions(-)
diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index c15d9221b04..666dedef798 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,12 @@
+2020-04-16 Andre Vieira <andre.simoesdiasvieira@arm.com>
+
+ Backport from mainline.
+ 2019-09-19 Richard Henderson <richard.henderson@linaro.org>
+
+ * config/aarch64/aarch64.c (aarch64_split_compare_and_swap): Disable
+ strong_zero_p for aarch64_track_speculation; unify some code paths;
+ use aarch64_gen_compare_reg instead of open-coding.
+
2020-04-16 Andre Vieira <andre.simoesdiasvieira@arm.com>
Backport from mainline
@@ -9,7 +18,7 @@
2020-04-16 Andre Vieira <andre.simoesdiasvieira@arm.com>
- Backport from mainline.
+ Backport from mainline
2019-09-19 Richard Henderson <richard.henderson@linaro.org>
* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Add support
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 09e78313489..2df5bf3db97 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -14359,13 +14359,11 @@ aarch64_split_compare_and_swap (rtx operands[])
/* Split after prolog/epilog to avoid interactions with shrinkwrapping. */
gcc_assert (epilogue_completed);
- rtx rval, mem, oldval, newval, scratch;
+ rtx rval, mem, oldval, newval, scratch, x, model_rtx;
machine_mode mode;
bool is_weak;
rtx_code_label *label1, *label2;
- rtx x, cond;
enum memmodel model;
- rtx model_rtx;
rval = operands[0];
mem = operands[1];
@@ -14386,7 +14384,7 @@ aarch64_split_compare_and_swap (rtx operands[])
CBNZ scratch, .label1
.label2:
CMP rval, 0. */
- bool strong_zero_p = !is_weak && oldval == const0_rtx && mode != TImode;
+ bool strong_zero_p = (!is_weak && oldval == const0_rtx && mode != TImode);
label1 = NULL;
if (!is_weak)
@@ -14399,26 +14397,20 @@ aarch64_split_compare_and_swap (rtx operands[])
/* The initial load can be relaxed for a __sync operation since a final
barrier will be emitted to stop code hoisting. */
if (is_mm_sync (model))
- aarch64_emit_load_exclusive (mode, rval, mem,
- GEN_INT (MEMMODEL_RELAXED));
+ aarch64_emit_load_exclusive (mode, rval, mem, GEN_INT (MEMMODEL_RELAXED));
else
aarch64_emit_load_exclusive (mode, rval, mem, model_rtx);
if (strong_zero_p)
- {
- x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
- x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
- gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
- aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
- }
+ x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
else
{
- cond = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
- x = gen_rtx_NE (VOIDmode, cond, const0_rtx);
- x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
- gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
- aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
+ rtx cc_reg = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
+ x = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
}
+ x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
+ gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
+ aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
aarch64_emit_store_exclusive (mode, scratch, mem, newval, model_rtx);
@@ -14430,22 +14422,16 @@ aarch64_split_compare_and_swap (rtx operands[])
aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
}
else
- {
- cond = gen_rtx_REG (CCmode, CC_REGNUM);
- x = gen_rtx_COMPARE (CCmode, scratch, const0_rtx);
- emit_insn (gen_rtx_SET (cond, x));
- }
+ aarch64_gen_compare_reg (NE, scratch, const0_rtx);
emit_label (label2);
+
/* If we used a CBNZ in the exchange loop emit an explicit compare with RVAL
to set the condition flags. If this is not used it will be removed by
later passes. */
if (strong_zero_p)
- {
- cond = gen_rtx_REG (CCmode, CC_REGNUM);
- x = gen_rtx_COMPARE (CCmode, rval, const0_rtx);
- emit_insn (gen_rtx_SET (cond, x));
- }
+ aarch64_gen_compare_reg (NE, rval, const0_rtx);
+
/* Emit any final barrier needed for a __sync operation. */
if (is_mm_sync (model))
aarch64_emit_post_barrier (model);
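Put together, the post-patch branch emission reads as a single path for both
cases. The following is assembled from the '+' lines in the hunks above
(surrounding context is paraphrased, so treat it as a sketch of the resulting
code rather than an exact excerpt of the file):

  if (strong_zero_p)
    /* Fast path: test RVAL against zero directly.  */
    x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
  else
    {
      /* General path: materialise the comparison in the CC register.  */
      rtx cc_reg = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
      x = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
    }
  /* One shared unlikely branch to label2, instead of two open-coded copies.  */
  x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
			    gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
  aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));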