From mboxrd@z Thu Jan  1 00:00:00 1970
Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm
From: Andrew Pinski
Date: Mon, 07 Aug 2017 20:44:00 -0000
Subject: Re: [PATCH] [Aarch64] Optimize subtract in shift counts
To: Michael
Collison
Cc: "gcc-patches@gcc.gnu.org", nd
Content-Type: text/plain; charset="UTF-8"

On Mon, Aug 7, 2017 at 1:36 PM, Michael Collison wrote:
> This patch improves code generation for shifts with subtract
> instructions where the first operand to the subtract is equal to the
> bit-size of the operation.
>
> long f1(long x, int i)
> {
>   return x >> (64 - i);
> }
>
> int f2(int x, int i)
> {
>   return x << (32 - i);
> }
>
> With trunk at -O2 we generate:
>
> f1:
>         mov     w2, 64
>         sub     w1, w2, w1
>         asr     x0, x0, x1
>         ret
>
> f2:
>         mov     w2, 32
>         sub     w1, w2, w1
>         lsl     w0, w0, w1
>         ret
>
> With the patch we generate:
>
> f1:
>         neg     w2, w1
>         asr     x0, x0, x2
>         ret
>         .size   f1, .-f1
>         .align  2
>         .p2align 3,,7
>         .global f2
>         .type   f2, %function
> f2:
>         neg     w2, w1
>         lsl     w0, w0, w2
>         ret
>
> Okay for trunk?

Shouldn't this be handled in simplify-rtx instead of an aarch64-specific
pattern?  That is, simplify:

  (SHIFT A (32 - B)) -> (SHIFT A (AND (NEG B) 31))

etc.  Or maybe not; I don't mind either way after thinking about it more.

Thanks,
Andrew

> 2017-08-07  Michael Collison
>
>         * config/aarch64/aarch64.md (*aarch64_reg_<optab>_minus<mode>3):
>         New pattern.
>
> 2017-08-07  Michael Collison
>
>         * gcc.target/aarch64/var_shift_mask_2.c: New test.