From: Richard Biener
Date: Tue, 27 Sep 2022 14:40:05 +0200
Subject: Re: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into BIT_FIELD_REFs alone
To: Andrew Pinski
Cc: Tamar Christina, "rguenther@suse.de", nd, "gcc-patches@gcc.gnu.org"

On Mon, Sep 26, 2022 at 5:25 PM Andrew Pinski via Gcc-patches wrote:
>
> On Sun, Sep 25, 2022 at 9:56 PM Tamar Christina wrote:
> >
> > > -----Original Message-----
> > > From: Andrew Pinski
> > > Sent: Saturday, September 24, 2022 8:57 PM
> > > To: Tamar Christina
> > > Cc: gcc-patches@gcc.gnu.org; nd; rguenther@suse.de
> > > Subject: Re: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into
> > > BIT_FIELD_REFs alone
> > >
> > > On Fri, Sep 23, 2022 at 4:43 AM Tamar Christina via Gcc-patches
> > > <gcc-patches@gcc.gnu.org> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > This adds a match.pd rule that can fold right shifts and
> > > > bit_field_refs of integers into just a bit_field_ref by adjusting the
> > > > offset and the size of the extract, and adds an extend to the previous size.
> > > >
> > > > Concretely, it turns:
> > > >
> > > > #include <arm_neon.h>
> > > >
> > > > unsigned int foor (uint32x4_t x)
> > > > {
> > > >   return x[1] >> 16;
> > > > }
> > > >
> > > > which used to generate:
> > > >
> > > >   _1 = BIT_FIELD_REF ;
> > > >   _3 = _1 >> 16;
> > > >
> > > > into
> > > >
> > > >   _4 = BIT_FIELD_REF ;
> > > >   _2 = (unsigned int) _4;
> > > >
> > > > I currently limit the rewrite to only doing it if the resulting
> > > > extract is in a mode the target supports, i.e. it won't rewrite it to
> > > > extract, say, 13 bits, because I worry that for targets without a
> > > > bitfield extract instruction this may be a de-optimization.
> > >
> > > It is only a de-optimization for the following case:
> > > * vector extraction
> > >
> > > All other cases should be handled correctly in the middle-end when
> > > expanding to RTL, because they need to be handled for bit-fields anyway.
> > > Plus SIGN_EXTRACT and ZERO_EXTRACT would be used in the integer case
> > > for the RTL.
> > > Getting SIGN_EXTRACT/ZERO_EXTRACT early on in the RTL is better than
> > > waiting until combine, really.
> >
> > Fair enough, I've dropped the constraint.
>
> Well, the constraint should still be kept for VECTOR_TYPE, I think.
> Attached is what I had done for left shift for integer types.
> Note the BYTES_BIG_ENDIAN part, which you missed for the right-shift case.

Note we formerly had BIT_FIELD_REF_UNSIGNED and allowed the precision of
the TREE_TYPE of the BIT_FIELD_REF to not match the extracted size.  That
might have mapped directly to zero/sign_extract.  Now that this is no more,
we should think of a canonical way to express this and make sure we can
synthesize those early.

Richard.

> Thanks,
> Andrew Pinski
>
> > > >
> > > > Bootstrapped and regtested on aarch64-none-linux-gnu and
> > > > x86_64-pc-linux-gnu with no issues.
> > > >
> > > > Testcases are added in patch 2/2.
> > > >
> > > > Ok for master?
> > > >
> > > > Thanks,
> > > > Tamar
> > > >
> > > > gcc/ChangeLog:
> > > >
> > > >         * match.pd: Add bitfield and shift folding.
> > > >
> > > > --- inline copy of patch --
> > > > diff --git a/gcc/match.pd b/gcc/match.pd
> > > > index 1d407414bee278c64c00d425d9f025c1c58d853d..b225d36dc758f1581502c8d03761544bfd499c01 100644
> > > > --- a/gcc/match.pd
> > > > +++ b/gcc/match.pd
> > > > @@ -7245,6 +7245,23 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
> > > >        && ANY_INTEGRAL_TYPE_P (type) && ANY_INTEGRAL_TYPE_P (TREE_TYPE(@0)))
> > > >    (IFN_REDUC_PLUS_WIDEN @0)))
> > > >
> > > > +/* Canonicalize BIT_FIELD_REFS and shifts to BIT_FIELD_REFS.  */
> > > > +(for shift (rshift)
> > > > +     op (plus)
> > > > + (simplify
> > > > +  (shift (BIT_FIELD_REF @0 @1 @2) integer_pow2p@3)
> > > > +  (if (INTEGRAL_TYPE_P (type))
> > > > +   (with { /* Can't use wide-int here as the precision differs between
> > > > +              @1 and @3.  */
> > > > +           unsigned HOST_WIDE_INT size = tree_to_uhwi (@1);
> > > > +           unsigned HOST_WIDE_INT shiftc = tree_to_uhwi (@3);
> > > > +           unsigned HOST_WIDE_INT newsize = size - shiftc;
> > > > +           tree nsize = wide_int_to_tree (bitsizetype, newsize);
> > > > +           tree ntype
> > > > +             = build_nonstandard_integer_type (newsize, 1); }
> > >
> > > Maybe use `build_nonstandard_integer_type (newsize, /* unsignedp = */
> > > true);` or better yet `build_nonstandard_integer_type (newsize,
> > > UNSIGNED);`
> >
> > Ah, will do,
> > Tamar.
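
The three spellings of the signedness argument discussed just above are
interchangeable: build_nonstandard_integer_type is declared in GCC's tree.h
as taking a plain `int unsignedp', and UNSIGNED comes from `enum signop' in
signop.h, where it has the value 1.  A minimal GCC-internals sketch of the
call-site difference (not part of the patch; `make_extract_type' is a
made-up name and `newsize' stands for the variable used in the patch):

  /* Requires GCC's internal headers (tree.h, signop.h); not a standalone
     program.  All three calls build the same unsigned integer type of
     NEWSIZE bits -- only the readability of the call site differs.  */
  static tree
  make_extract_type (unsigned HOST_WIDE_INT newsize)
  {
    tree t1 = build_nonstandard_integer_type (newsize, 1);
    tree t2 = build_nonstandard_integer_type (newsize, /* unsignedp = */ true);
    tree t3 = build_nonstandard_integer_type (newsize, UNSIGNED);  /* enum signop */
    (void) t1; (void) t2;
    return t3;
  }
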
> > >
> > > I had started to convert some of the unsignedp into enum signop but I never
> > > finished or submitted the patch.
> > >
> > > Thanks,
> > > Andrew Pinski
> > >
> > > > +    (if (ntype)
> > > > +     (convert:type (BIT_FIELD_REF:ntype @0 { nsize; } (op @2 @3))))))))
> > > > +
> > > > (simplify
> > > >  (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
> > > >  (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
> > > >
> > > > --
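
The BIT_FIELD_REF operands in the GIMPLE example near the top of the thread
are not shown; the sketch below assumes the values implied by `x[1] >> 16'
on a little-endian target -- a 32-bit extract at bit offset 32 before the
fold, and a 16-bit extract at bit offset 48 (plus a zero-extension) after
it.  It is a self-contained C illustration of the equivalence the fold
relies on, modelling the vector as a plain 16-byte buffer; on a big-endian
target the offsets differ, which is the BYTES_BIG_ENDIAN point raised
above.  Function and variable names are illustrative only.

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  /* Little-endian host assumed throughout.  */

  /* Shape before the fold: a 32-bit extract at bit offset 32, then a shift.  */
  static uint32_t
  extract_then_shift (const uint8_t v[16])
  {
    uint32_t lane1;
    memcpy (&lane1, v + 4, sizeof lane1);   /* 32 bits at bit offset 32.  */
    return lane1 >> 16;
  }

  /* Shape after the fold: a 16-bit extract at bit offset 48, zero-extended.  */
  static uint32_t
  extract_narrow (const uint8_t v[16])
  {
    uint16_t hi;
    memcpy (&hi, v + 6, sizeof hi);         /* 16 bits at bit offset 48.  */
    return hi;                              /* zero-extend to 32 bits.  */
  }

  int
  main (void)
  {
    uint8_t v[16];
    for (int i = 0; i < 16; i++)
      v[i] = (uint8_t) (17 * i + 3);
    assert (extract_then_shift (v) == extract_narrow (v));
    return 0;
  }
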