From: Richard Earnshaw
To: "Bin.Cheng"
Cc: James Greenhalgh, Bin Cheng, gcc-patches List
Subject: Re: [PATCH AArch64]Handle REG+REG+CONST and REG+NON_REG+CONST in legitimize address
Date: Tue, 24 Nov 2015 09:59:00 -0000
Message-ID: <5654343A.2080609@foss.arm.com>

On 24/11/15 02:51, Bin.Cheng wrote:
>>> The aarch64's problem is we don't define addptr3 pattern, and we don't
>>> have direct insn pattern describing the "x + y << z".  According to
>>> gcc internal:
>>>
>>> ‘addptrm3’
>>> Like addm3 but is guaranteed to only be used for address calculations.
>>> The expanded code is not allowed to clobber the condition code.  It
>>> only needs to be defined if addm3 sets the condition code.
>>
>> addm3 on aarch64 does not set the condition codes, so by this rule we
>> shouldn't need to define this pattern.
>
> Hi Richard,
> I think that rule has a prerequisite that backend needs to support
> register shifted addition in addm3 pattern.

addm3 is a named pattern and its format is well defined.  It does not
take a shifted operand and never has.

> Apparently for AArch64,
> addm3 only supports "reg+reg" or "reg+imm".  Also we don't really
> "does not set the condition codes" actually, because both
> "adds_shift_imm_*" and "adds_mul_imm_*" do set the condition flags.

You appear to be confusing named patterns (used by expand) with
recognizers.  Anyway, we have

(define_insn "*add_<shift>_<mode>"
  [(set (match_operand:GPI 0 "register_operand" "=r")
        (plus:GPI (ASHIFT:GPI (match_operand:GPI 1 "register_operand" "r")
                              (match_operand:QI 2 "aarch64_shift_imm_<mode>" "n"))
                  (match_operand:GPI 3 "register_operand" "r")))]

which is a non-flag-setting add with a shifted operand.
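
For contrast, the named pattern that expand actually calls has roughly the
shape below.  This is a simplified sketch, not the exact aarch64.md text,
and the operand 2 predicate is only indicative:

;; Sketch of the add<mode>3 interface: operand 2 is a register or an
;; add-range immediate.  There is no way to pass a shifted operand through
;; this named pattern; the shifted form is only created later (e.g. by
;; combine) and matched by recognizers such as the one quoted above.
(define_expand "add<mode>3"
  [(set (match_operand:GPI 0 "register_operand" "")
        (plus:GPI (match_operand:GPI 1 "register_operand" "")
                  (match_operand:GPI 2 "aarch64_plus_operand" "")))]
  ""
  "")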
> Either way I think it is another backend issue, so do you approve that
> I commit this patch now?

Not yet.  I think there's something fundamental amiss here.

BTW, it looks to me as though addptrm3 should have exactly the same
operand rules as addm3 (documentation reads "like addm3"), so a shifted
operand shouldn't be supported there either.  If that isn't the case then
that should be clearly called out in the documentation.

R.
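
For reference, if a target did have to provide addptrm3 under these rules
(reg + reg or reg + immediate only, no condition-code clobber), a minimal
hypothetical sketch for AArch64 might look like the following; the predicate
choice and the force_reg fallback are assumptions for illustration, not a
tested or proposed implementation:

;; Hypothetical sketch only -- AArch64 does not define this pattern, and by
;; the rule quoted above it should not need to, because add<mode>3 does not
;; set the condition code.
(define_expand "addptr<mode>3"
  [(set (match_operand:GPI 0 "register_operand" "")
        (plus:GPI (match_operand:GPI 1 "register_operand" "")
                  (match_operand:GPI 2 "nonmemory_operand" "")))]
  ""
{
  /* Stay within the add<mode>3 operand rules: if the addend is neither a
     register nor an in-range immediate, force it into a register rather
     than expanding to any sequence that could clobber the flags.  */
  if (!aarch64_plus_operand (operands[2], <MODE>mode))
    operands[2] = force_reg (<MODE>mode, operands[2]);
})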