From: Richard Sandiford <richard.sandiford@linaro.org>
To: gcc-patches@gcc.gnu.org
Cc: richard.earnshaw@arm.com, james.greenhalgh@arm.com, marcus.shawcroft@arm.com
Subject: [02/nn] [AArch64] Move code around
Date: Fri, 27 Oct 2017 13:25:00 -0000
In-Reply-To: <873764d8y3.fsf@linaro.org> (Richard Sandiford's message of "Fri, 27 Oct 2017 14:19:48 +0100")
Message-ID: <87r2tobu7h.fsf@linaro.org>

This patch simply moves code around, in order to make the later patches
easier to read and to avoid forward declarations.  It doesn't add the
missing function comments because the interfaces will change in a later
patch.


2017-10-26  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* config/aarch64/aarch64.c (aarch64_add_constant_internal)
	(aarch64_add_constant, aarch64_add_sp, aarch64_sub_sp): Move
	earlier in file.
Index: gcc/config/aarch64/aarch64.c
===================================================================
--- gcc/config/aarch64/aarch64.c	2017-10-27 14:10:14.622293803 +0100
+++ gcc/config/aarch64/aarch64.c	2017-10-27 14:10:17.740863052 +0100
@@ -1966,6 +1966,87 @@ aarch64_internal_mov_immediate (rtx dest
 
   return num_insns;
 }
+/* Add DELTA to REGNUM in mode MODE.  SCRATCHREG can be used to hold a
+   temporary value if necessary.  FRAME_RELATED_P should be true if
+   the RTX_FRAME_RELATED flag should be set and CFA adjustments added
+   to the generated instructions.  If SCRATCHREG is known to hold
+   abs (delta), EMIT_MOVE_IMM can be set to false to avoid emitting the
+   immediate again.
+
+   Since this function may be used to adjust the stack pointer, we must
+   ensure that it cannot cause transient stack deallocation (for example
+   by first incrementing SP and then decrementing when adjusting by a
+   large immediate).  */
+
+static void
+aarch64_add_constant_internal (scalar_int_mode mode, int regnum,
+			       int scratchreg, HOST_WIDE_INT delta,
+			       bool frame_related_p, bool emit_move_imm)
+{
+  HOST_WIDE_INT mdelta = abs_hwi (delta);
+  rtx this_rtx = gen_rtx_REG (mode, regnum);
+  rtx_insn *insn;
+
+  if (!mdelta)
+    return;
+
+  /* Single instruction adjustment.  */
+  if (aarch64_uimm12_shift (mdelta))
+    {
+      insn = emit_insn (gen_add2_insn (this_rtx, GEN_INT (delta)));
+      RTX_FRAME_RELATED_P (insn) = frame_related_p;
+      return;
+    }
+
+  /* Emit 2 additions/subtractions if the adjustment is less than 24 bits.
+     Only do this if mdelta is not a 16-bit move as adjusting using a move
+     is better.  */
+  if (mdelta < 0x1000000 && !aarch64_move_imm (mdelta, mode))
+    {
+      HOST_WIDE_INT low_off = mdelta & 0xfff;
+
+      low_off = delta < 0 ? -low_off : low_off;
+      insn = emit_insn (gen_add2_insn (this_rtx, GEN_INT (low_off)));
+      RTX_FRAME_RELATED_P (insn) = frame_related_p;
+      insn = emit_insn (gen_add2_insn (this_rtx, GEN_INT (delta - low_off)));
+      RTX_FRAME_RELATED_P (insn) = frame_related_p;
+      return;
+    }
+
+  /* Emit a move immediate if required and an addition/subtraction.  */
+  rtx scratch_rtx = gen_rtx_REG (mode, scratchreg);
+  if (emit_move_imm)
+    aarch64_internal_mov_immediate (scratch_rtx, GEN_INT (mdelta), true, mode);
+  insn = emit_insn (delta < 0 ? gen_sub2_insn (this_rtx, scratch_rtx)
+		    : gen_add2_insn (this_rtx, scratch_rtx));
+  if (frame_related_p)
+    {
+      RTX_FRAME_RELATED_P (insn) = frame_related_p;
+      rtx adj = plus_constant (mode, this_rtx, delta);
+      add_reg_note (insn , REG_CFA_ADJUST_CFA, gen_rtx_SET (this_rtx, adj));
+    }
+}
+
+static inline void
+aarch64_add_constant (scalar_int_mode mode, int regnum, int scratchreg,
+		      HOST_WIDE_INT delta)
+{
+  aarch64_add_constant_internal (mode, regnum, scratchreg, delta, false, true);
+}
+
+static inline void
+aarch64_add_sp (int scratchreg, HOST_WIDE_INT delta, bool emit_move_imm)
+{
+  aarch64_add_constant_internal (Pmode, SP_REGNUM, scratchreg, delta,
+				 true, emit_move_imm);
+}
+
+static inline void
+aarch64_sub_sp (int scratchreg, HOST_WIDE_INT delta, bool frame_related_p)
+{
+  aarch64_add_constant_internal (Pmode, SP_REGNUM, scratchreg, -delta,
+				 frame_related_p, true);
+}
 
 void
 aarch64_expand_mov_immediate (rtx dest, rtx imm)
@@ -2077,88 +2158,6 @@ aarch64_expand_mov_immediate (rtx dest,
 				  as_a <scalar_int_mode> (mode));
 }
 
-/* Add DELTA to REGNUM in mode MODE.  SCRATCHREG can be used to hold a
-   temporary value if necessary.  FRAME_RELATED_P should be true if
-   the RTX_FRAME_RELATED flag should be set and CFA adjustments added
-   to the generated instructions.  If SCRATCHREG is known to hold
-   abs (delta), EMIT_MOVE_IMM can be set to false to avoid emitting the
-   immediate again.
-
-   Since this function may be used to adjust the stack pointer, we must
-   ensure that it cannot cause transient stack deallocation (for example
-   by first incrementing SP and then decrementing when adjusting by a
-   large immediate).  */
-
-static void
-aarch64_add_constant_internal (scalar_int_mode mode, int regnum,
-			       int scratchreg, HOST_WIDE_INT delta,
-			       bool frame_related_p, bool emit_move_imm)
-{
-  HOST_WIDE_INT mdelta = abs_hwi (delta);
-  rtx this_rtx = gen_rtx_REG (mode, regnum);
-  rtx_insn *insn;
-
-  if (!mdelta)
-    return;
-
-  /* Single instruction adjustment.  */
-  if (aarch64_uimm12_shift (mdelta))
-    {
-      insn = emit_insn (gen_add2_insn (this_rtx, GEN_INT (delta)));
-      RTX_FRAME_RELATED_P (insn) = frame_related_p;
-      return;
-    }
-
-  /* Emit 2 additions/subtractions if the adjustment is less than 24 bits.
-     Only do this if mdelta is not a 16-bit move as adjusting using a move
-     is better.  */
-  if (mdelta < 0x1000000 && !aarch64_move_imm (mdelta, mode))
-    {
-      HOST_WIDE_INT low_off = mdelta & 0xfff;
-
-      low_off = delta < 0 ? -low_off : low_off;
-      insn = emit_insn (gen_add2_insn (this_rtx, GEN_INT (low_off)));
-      RTX_FRAME_RELATED_P (insn) = frame_related_p;
-      insn = emit_insn (gen_add2_insn (this_rtx, GEN_INT (delta - low_off)));
-      RTX_FRAME_RELATED_P (insn) = frame_related_p;
-      return;
-    }
-
-  /* Emit a move immediate if required and an addition/subtraction.  */
-  rtx scratch_rtx = gen_rtx_REG (mode, scratchreg);
-  if (emit_move_imm)
-    aarch64_internal_mov_immediate (scratch_rtx, GEN_INT (mdelta), true, mode);
-  insn = emit_insn (delta < 0 ? gen_sub2_insn (this_rtx, scratch_rtx)
-		    : gen_add2_insn (this_rtx, scratch_rtx));
-  if (frame_related_p)
-    {
-      RTX_FRAME_RELATED_P (insn) = frame_related_p;
-      rtx adj = plus_constant (mode, this_rtx, delta);
-      add_reg_note (insn , REG_CFA_ADJUST_CFA, gen_rtx_SET (this_rtx, adj));
-    }
-}
-
-static inline void
-aarch64_add_constant (scalar_int_mode mode, int regnum, int scratchreg,
-		      HOST_WIDE_INT delta)
-{
-  aarch64_add_constant_internal (mode, regnum, scratchreg, delta, false, true);
-}
-
-static inline void
-aarch64_add_sp (int scratchreg, HOST_WIDE_INT delta, bool emit_move_imm)
-{
-  aarch64_add_constant_internal (Pmode, SP_REGNUM, scratchreg, delta,
-				 true, emit_move_imm);
-}
-
-static inline void
-aarch64_sub_sp (int scratchreg, HOST_WIDE_INT delta, bool frame_related_p)
-{
-  aarch64_add_constant_internal (Pmode, SP_REGNUM, scratchreg, -delta,
-				 frame_related_p, true);
-}
-
 static bool
 aarch64_function_ok_for_sibcall (tree decl ATTRIBUTE_UNUSED,
				 tree exp ATTRIBUTE_UNUSED)
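
As a side note for readers skimming the move: the decision logic in
aarch64_add_constant_internal can be summarised by the rough standalone
sketch below.  It is an illustration only, not GCC code: fits_uimm12_shift
and the ">= 0x10000" test are simplified stand-ins for aarch64_uimm12_shift
and !aarch64_move_imm, trace_add_constant is a made-up name, and printf
merely traces the kind of instruction the real function would emit.

/* Standalone model of the immediate-splitting strategy; not GCC code.  */

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for aarch64_uimm12_shift: true if V fits the
   12-bit ADD/SUB immediate, optionally shifted left by 12.  */
static bool
fits_uimm12_shift (uint64_t v)
{
  return (v & ~UINT64_C (0xfff)) == 0
	 || (v & ~(UINT64_C (0xfff) << 12)) == 0;
}

/* Trace the adjustment the real function would emit for DELTA.  */
static void
trace_add_constant (int64_t delta)
{
  uint64_t mdelta = delta < 0 ? -(uint64_t) delta : (uint64_t) delta;

  if (mdelta == 0)
    return;

  /* Case 1: a single add/sub immediate.  */
  if (fits_uimm12_shift (mdelta))
    {
      printf ("add reg, reg, #%lld\n", (long long) delta);
      return;
    }

  /* Case 2: adjustments below 24 bits are split into two immediates
     that move in the same direction, so the register (typically SP)
     never transiently passes its final value.  The real code also
     skips this when mdelta is a 16-bit move; ">= 0x10000" is a crude
     approximation of that test.  */
  if (mdelta < 0x1000000 && mdelta >= 0x10000)
    {
      int64_t low_off = (int64_t) (mdelta & 0xfff);
      low_off = delta < 0 ? -low_off : low_off;
      printf ("add reg, reg, #%lld\n", (long long) low_off);
      printf ("add reg, reg, #%lld\n", (long long) (delta - low_off));
      return;
    }

  /* Case 3: materialise |delta| in a scratch register and do a single
     register add or sub.  */
  printf ("mov scratch, #%llu\n", (unsigned long long) mdelta);
  printf ("%s reg, reg, scratch\n", delta < 0 ? "sub" : "add");
}

int
main (void)
{
  trace_add_constant (0x10);       /* case 1: single immediate */
  trace_add_constant (-0x12345);   /* case 2: two immediates */
  trace_add_constant (0x1234567);  /* case 3: move + register add */
  return 0;
}

The point of case 2 is that both partial adjustments have the same sign
as DELTA, which is what keeps stack-pointer adjustments free of transient
deallocation, as the comment on the function explains.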