From mboxrd@z Thu Jan 1 00:00:00 1970
Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm
Date: Wed, 15 Apr 2015 11:05:00 -0000
From: James Greenhalgh
To: Kyrill Tkachov
Cc: Kugan, "gcc-patches@gcc.gnu.org", Marcus Shawcroft, Richard Earnshaw,
	Jim Wilson
Subject: Re: [AArch64][PR65375] Fix RTX cost for vector SET
Message-ID: <20150415110538.GA22143@arm.com>
References: <55066BCC.4010900@linaro.org> <5506AA24.3050108@arm.com>
	<5506CD7A.7030109@linaro.org> <5506D77B.5060909@linaro.org>
	<55070972.3000800@arm.com> <5507813E.3060106@linaro.org>
	<5513B390.2030201@linaro.org> <552D8FF7.5000105@linaro.org>
	<20150415092509.GA20852@arm.com> <552E39F3.5030102@arm.com>
In-Reply-To: <552E39F3.5030102@arm.com>
X-SW-Source: 2015-04/txt/msg00742.txt.bz2

On Wed, Apr 15, 2015 at 11:14:11AM +0100, Kyrill Tkachov wrote:
>
> On 15/04/15 10:25, James Greenhalgh wrote:
> > On Tue, Apr 14, 2015 at 11:08:55PM +0100, Kugan wrote:
> >> Now that Stage1 is open, is this OK for
> >> trunk.
> >
> > Hi Kugan,
> >
> >> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
> >> index cba3c1a..d6ad0af 100644
> >> --- a/gcc/config/aarch64/aarch64.c
> >> +++ b/gcc/config/aarch64/aarch64.c
> >> @@ -5544,10 +5544,17 @@ aarch64_rtx_costs (rtx x, int code, int outer ATTRIBUTE_UNUSED,
> >>
> >>        /* Fall through.  */
> >>      case REG:
> >> +      if (VECTOR_MODE_P (GET_MODE (op0)) && REG_P (op1))
> >> +	{
> >> +	  /* The cost is 1 per vector-register copied.  */
> >> +	  int n_minus_1 = (GET_MODE_SIZE (GET_MODE (op0)) - 1)
> >> +			  / GET_MODE_SIZE (V4SImode);
> >> +	  *cost = COSTS_N_INSNS (n_minus_1 + 1);
> >> +	}
> >>        /* const0_rtx is in general free, but we will use an
> >>  	 instruction to set a register to 0.  */
> >> -      if (REG_P (op1) || op1 == const0_rtx)
> >> -	{
> >> +      else if (REG_P (op1) || op1 == const0_rtx)
> >> +	{
> >>  	  /* The cost is 1 per register copied.  */
> >>  	  int n_minus_1 = (GET_MODE_SIZE (GET_MODE (op0)) - 1)
> >>  			  / UNITS_PER_WORD;
> >
> > I would not have expected control flow to reach this point, as we have:
> >
> >>   /* TODO: The cost infrastructure currently does not handle
> >>      vector operations.  Assume that all vector operations
> >>      are equally expensive.  */
> >>   if (VECTOR_MODE_P (mode))
> >>     {
> >>       if (speed)
> >> 	*cost += extra_cost->vect.alu;
> >>       return true;
> >>     }
> >
> > But, I see that this check is broken for a set RTX (which has no mode).
> > So, your patch works, but only due to a bug in my original implementation.
> > This leaves the code with quite a messy design.
> >
> > There are two ways I see that we could clean things up, both of which
> > require some reworking of your patch.
> >
> > Either we remove my check above and teach the RTX costs how to properly
> > cost vector operations, or we fix my check to catch all vector RTX
> > and add the special cases for the small subset of things we understand
> > up there.
> >
> > The correct approach in the long term is to fix the RTX costs to correctly
> > understand vector operations, so I'd much prefer to see a patch along
> > these lines, though I appreciate that is a substantially more invasive
> > piece of work.
>
> Would we want to catch all vector RTXes in that check at the top
> and have special vector rtx handling there? (Perhaps even in a function
> of its own like aarch64_vector_rtx_costs?)

No, I think this would necessitate duplicating all of the idiom
recognition and RTX walking code from aarch64_rtx_costs. However, this
would be the easiest way to fix this PR in the short term.

> Or do you think it would be cleaner to handle them in the
> aarch64_rtx_costs giant switch? Vector-specific RTX codes like
> vec_concat and vec_select would integrate cleanly, but handling other
> common rtxen could potentially be messy?

Well, if I'm allowed to dream for a bit...

To reduce the need for spaghetti code a little, what I would really like
to see is a logical split between recognising an instruction and costing
the individual modes of that instruction. So we would invent a function
like aarch64_classify_rtx which would return "you gave me something which
looks like an add immediate", and we would leave switching on modes to
aarch64_rtx_costs.

If I can dream even more: I don't see why it makes sense for us to keep a
hand-rolled instruction recognizer in the back end, and I'd like to find
a way to reuse the common recog infrastructure, then add something like
what sched1 does to guess at likely register allocations and extract the
type attribute from there. For that to work, we would need to change a
huge amount of infrastructure to ensure that a register allocation guess
was available whenever someone wanted a cost estimate - a huge, huge
problem when a Gimple pass speculatively forms some invalid RTX and hands
it off to rtx_costs. So I think this is not a realistic plan!
Those are huge refactoring tasks which I'm not going to get a chance to
look at any time soon, so I think we have to be pragmatic about what can
be achieved.

Adding to the common RTX recognisers will potentially be messy, but it is
a neater approach than duplicating the logic (have a look at the amount
of effort we go to to spot a non-fused Multiply Add operation - we
certainly don't want to duplicate that out for vectors).

Thanks,
James