From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jakub Jelinek
To: gcc-cvs@gcc.gnu.org
Subject: [gcc(refs/vendors/redhat/heads/gcc-8-branch)] aarch64: Tidy aarch64_split_compare_and_swap
Date: Thu, 17 Sep 2020 16:58:10 +0000 (GMT)
Message-Id: <20200917165810.A1A3B398794B@sourceware.org>
List-Id: Gcc-cvs mailing list
X-Act-Checkin: gcc
X-Git-Author: Andre Vieira
X-Git-Refname: refs/vendors/redhat/heads/gcc-8-branch
X-Git-Oldrev: c185c39a5a5a28565a2a5f73ff1f5ecca6855c8a
X-Git-Newrev: fd7a5f21ceb5fb6247bd42110e0c158a672b8ed6

https://gcc.gnu.org/g:fd7a5f21ceb5fb6247bd42110e0c158a672b8ed6

commit fd7a5f21ceb5fb6247bd42110e0c158a672b8ed6
Author: Andre Vieira
Date:   Thu Apr 16 10:16:13 2020 +0100

    aarch64: Tidy aarch64_split_compare_and_swap

    2020-04-16  Andre Vieira

            Backport from mainline.
            2019-09-19  Richard Henderson

            * config/aarch64/aarch64 (aarch64_split_compare_and_swap): Disable
            strong_zero_p for aarch64_track_speculation; unify some code paths;
            use aarch64_gen_compare_reg instead of open-coding.
Diff:
---
 gcc/ChangeLog                | 11 ++++++++++-
 gcc/config/aarch64/aarch64.c | 40 +++++++++++++---------------------------
 2 files changed, 23 insertions(+), 28 deletions(-)

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index c15d9221b04..666dedef798 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,12 @@
+2020-04-16  Andre Vieira
+
+	Backport from mainline.
+	2019-09-19  Richard Henderson
+
+	* config/aarch64/aarch64 (aarch64_split_compare_and_swap): Disable
+	strong_zero_p for aarch64_track_speculation; unify some code paths;
+	use aarch64_gen_compare_reg instead of open-coding.
+
 2020-04-16  Andre Vieira
 
 	Backport from mainline
@@ -9,7 +18,7 @@
 
 2020-04-16  Andre Vieira
 
-	Backport from mainline.
+	Backport from mainline
 	2019-09-19  Richard Henderson
 
 	* config/aarch64/aarch64.c (aarch64_gen_compare_reg): Add support
diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index 09e78313489..2df5bf3db97 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -14359,13 +14359,11 @@ aarch64_split_compare_and_swap (rtx operands[])
   /* Split after prolog/epilog to avoid interactions with shrinkwrapping.  */
   gcc_assert (epilogue_completed);
 
-  rtx rval, mem, oldval, newval, scratch;
+  rtx rval, mem, oldval, newval, scratch, x, model_rtx;
   machine_mode mode;
   bool is_weak;
   rtx_code_label *label1, *label2;
-  rtx x, cond;
   enum memmodel model;
-  rtx model_rtx;
 
   rval = operands[0];
   mem = operands[1];
@@ -14386,7 +14384,7 @@ aarch64_split_compare_and_swap (rtx operands[])
 	CBNZ	scratch, .label1
     .label2:
 	CMP	rval, 0.  */
-  bool strong_zero_p = !is_weak && oldval == const0_rtx && mode != TImode;
+  bool strong_zero_p = (!is_weak && oldval == const0_rtx && mode != TImode);
 
   label1 = NULL;
   if (!is_weak)
@@ -14399,26 +14397,20 @@ aarch64_split_compare_and_swap (rtx operands[])
 
   /* The initial load can be relaxed for a __sync operation since a final
      barrier will be emitted to stop code hoisting.  */
   if (is_mm_sync (model))
-    aarch64_emit_load_exclusive (mode, rval, mem,
-				 GEN_INT (MEMMODEL_RELAXED));
+    aarch64_emit_load_exclusive (mode, rval, mem, GEN_INT (MEMMODEL_RELAXED));
   else
     aarch64_emit_load_exclusive (mode, rval, mem, model_rtx);
 
   if (strong_zero_p)
-    {
-      x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
-      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
-      aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
-    }
+    x = gen_rtx_NE (VOIDmode, rval, const0_rtx);
   else
     {
-      cond = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
-      x = gen_rtx_NE (VOIDmode, cond, const0_rtx);
-      x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
-				gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
-      aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
+      rtx cc_reg = aarch64_gen_compare_reg_maybe_ze (NE, rval, oldval, mode);
+      x = gen_rtx_NE (VOIDmode, cc_reg, const0_rtx);
     }
+  x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
+			    gen_rtx_LABEL_REF (Pmode, label2), pc_rtx);
+  aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
 
   aarch64_emit_store_exclusive (mode, scratch, mem, newval, model_rtx);
@@ -14430,22 +14422,16 @@ aarch64_split_compare_and_swap (rtx operands[])
       aarch64_emit_unlikely_jump (gen_rtx_SET (pc_rtx, x));
     }
   else
-    {
-      cond = gen_rtx_REG (CCmode, CC_REGNUM);
-      x = gen_rtx_COMPARE (CCmode, scratch, const0_rtx);
-      emit_insn (gen_rtx_SET (cond, x));
-    }
+    aarch64_gen_compare_reg (NE, scratch, const0_rtx);
 
   emit_label (label2);
+
   /* If we used a CBNZ in the exchange loop emit an explicit compare
      with RVAL to set the condition flags.  If this is not used it will
      be removed by later passes.  */
   if (strong_zero_p)
-    {
-      cond = gen_rtx_REG (CCmode, CC_REGNUM);
-      x = gen_rtx_COMPARE (CCmode, rval, const0_rtx);
-      emit_insn (gen_rtx_SET (cond, x));
-    }
+    aarch64_gen_compare_reg (NE, rval, const0_rtx);
+
   /* Emit any final barrier needed for a __sync operation.  */
   if (is_mm_sync (model))
     aarch64_emit_post_barrier (model);