From: Christophe Lyon
Date: Wed, 07 Sep 2016 20:04:00 -0000
Subject: Re: [PATCH][AArch64] Improve legitimize_address
To: "Richard Earnshaw (lists)"
Cc: Wilco Dijkstra, GCC Patches, nd

Hi Wilco,

On 7 September 2016 at 14:43, Richard Earnshaw (lists) wrote:
> On 06/09/16 14:14, Wilco Dijkstra wrote:
>> Improve aarch64_legitimize_address - avoid splitting the offset if it is
>> supported.  When we do split, take the mode size into account.  BLKmode
>> falls into the unaligned case but should be treated like LDP/STP.
>> This improves codesize slightly due to fewer base address calculations:
>>
>> int f(int *p) { return p[5000] + p[7000]; }
>>
>> Now generates:
>>
>> f:
>>         add     x0, x0, 16384
>>         ldr     w1, [x0, 3616]
>>         ldr     w0, [x0, 11616]
>>         add     w0, w1, w0
>>         ret
>>
>> instead of:
>>
>> f:
>>         add     x1, x0, 16384
>>         add     x0, x0, 24576
>>         ldr     w1, [x1, 3616]
>>         ldr     w0, [x0, 3424]
>>         add     w0, w1, w0
>>         ret
>>
>> OK for trunk?
>>
>> ChangeLog:
>> 2016-09-06  Wilco Dijkstra
>>
>> gcc/
>>         * config/aarch64/aarch64.c (aarch64_legitimize_address):
>>         Avoid use of base_offset if offset already in range.
>
> OK.
>
> R.

After this patch, I've noticed a regression:
FAIL: gcc.target/aarch64/ldp_vec_64_1.c scan-assembler ldp\td[0-9]+, d[0-9]

You probably need to adjust the scan pattern.
Thanks,

Christophe

>
>> --
>>
>> diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
>> index 27bbdbad8cddc576f9ed4fd0670116bd6d318412..119ff0aecb0c9f88899fa141b2c7f9158281f9c3 100644
>> --- a/gcc/config/aarch64/aarch64.c
>> +++ b/gcc/config/aarch64/aarch64.c
>> @@ -5058,9 +5058,19 @@ aarch64_legitimize_address (rtx x, rtx /* orig_x */, machine_mode mode)
>>        /* For offsets aren't a multiple of the access size, the limit is
>>           -256...255.  */
>>        else if (offset & (GET_MODE_SIZE (mode) - 1))
>> -        base_offset = (offset + 0x100) & ~0x1ff;
>> +        {
>> +          base_offset = (offset + 0x100) & ~0x1ff;
>> +
>> +          /* BLKmode typically uses LDP of X-registers.  */
>> +          if (mode == BLKmode)
>> +            base_offset = (offset + 512) & ~0x3ff;
>> +        }
>> +      /* Small negative offsets are supported.  */
>> +      else if (IN_RANGE (offset, -256, 0))
>> +        base_offset = 0;
>> +      /* Use 12-bit offset by access size.  */
>>        else
>> -        base_offset = offset & ~0xfff;
>> +        base_offset = offset & (~0xfff * GET_MODE_SIZE (mode));
>>
>>        if (base_offset != 0)
>>          {
>>
>
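For reference, here is a minimal standalone sketch of the aligned-offset split in
the last hunk above; split_aligned_offset and mode_size are illustrative names
only, not GCC internals, but the expression mirrors the patch's
offset & (~0xfff * GET_MODE_SIZE (mode)) case and reproduces the base/offset
pair seen in the generated code for p[5000] and p[7000] with 4-byte ints:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the patch's aligned case: keep a 12-bit
   scaled immediate in the load and fold the rest into the base.  */
static void
split_aligned_offset (int64_t offset, int64_t mode_size)
{
  int64_t base_offset = offset & (~0xfff * mode_size);
  printf ("offset %lld: add base %lld, ldr offset %lld\n",
          (long long) offset, (long long) base_offset,
          (long long) (offset - base_offset));
}

int
main (void)
{
  split_aligned_offset (20000, 4);   /* p[5000]: base 16384, ldr 3616   */
  split_aligned_offset (28000, 4);   /* p[7000]: base 16384, ldr 11616  */
  return 0;
}

Both accesses land on the same base (16384), which is why the new code needs
only a single add, matching the first assembly sequence quoted above.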