public inbox for gcc-patches@gcc.gnu.org
From: Andrew Pinski <pinskia@gmail.com>
To: Tamar Christina <Tamar.Christina@arm.com>
Cc: "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
	nd <nd@arm.com>,  "rguenther@suse.de" <rguenther@suse.de>
Subject: Re: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into BIT_FIELD_REFs alone
Date: Mon, 26 Sep 2022 08:24:55 -0700	[thread overview]
Message-ID: <CA+=Sn1=KZ9XhoV-PAALoQB=LNq8LO+c0WjTrXieav9p624j9jw@mail.gmail.com> (raw)
In-Reply-To: <VI1PR08MB532569CA4CEFFB153822FEDDFF529@VI1PR08MB5325.eurprd08.prod.outlook.com>

[-- Attachment #1: Type: text/plain, Size: 4281 bytes --]

On Sun, Sep 25, 2022 at 9:56 PM Tamar Christina <Tamar.Christina@arm.com> wrote:
>
> > -----Original Message-----
> > From: Andrew Pinski <pinskia@gmail.com>
> > Sent: Saturday, September 24, 2022 8:57 PM
> > To: Tamar Christina <Tamar.Christina@arm.com>
> > Cc: gcc-patches@gcc.gnu.org; nd <nd@arm.com>; rguenther@suse.de
> > Subject: Re: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into
> > BIT_FIELD_REFs alone
> >
> > On Fri, Sep 23, 2022 at 4:43 AM Tamar Christina via Gcc-patches <gcc-
> > patches@gcc.gnu.org> wrote:
> > >
> > > Hi All,
> > >
> > > This adds a match.pd rule that can fold right shifts of
> > > bit_field_refs of integers into just a bit_field_ref, by adjusting
> > > the offset and size of the extract and adding an extension back to
> > > the previous size.
> > >
> > > Concretely turns:
> > >
> > > #include <arm_neon.h>
> > >
> > > unsigned int foor (uint32x4_t x)
> > > {
> > >     return x[1] >> 16;
> > > }
> > >
> > > which used to generate:
> > >
> > >   _1 = BIT_FIELD_REF <x_2(D), 32, 32>;
> > >   _3 = _1 >> 16;
> > >
> > > into
> > >
> > >   _4 = BIT_FIELD_REF <x_1(D), 16, 48>;
> > >   _2 = (unsigned int) _4;
> > >
> > > I currently limit the rewrite to cases where the resulting extract
> > > is in a mode the target supports, i.e. it won't rewrite it to
> > > extract, say, 13 bits, because I worry that for targets without a
> > > bit-field extract instruction this may be a de-optimization.
> >
> > It is only a de-optimization for the following case:
> > * vector extraction
> >
> > All other cases should be handled correctly in the middle-end when
> > expanding to RTL because they need to be handled for bit-fields anyways.
> > Plus SIGN_EXTRACT and ZERO_EXTRACT would be used in the integer case
> > for the RTL.
> > Getting SIGN_EXTRACT/ZERO_EXTRACT early on in the RTL is better than
> > waiting until combine really.
> >
>
> Fair enough, I've dropped the constraint.

Well, the constraint should still be kept for VECTOR_TYPE, I think.
Attached is what I had done for left shifts of integer types.
Note the BYTES_BIG_ENDIAN handling, which you missed for the right-shift case.

Thanks,
Andrew Pinski

>
> >
> > >
> > > Bootstrapped Regtested on aarch64-none-linux-gnu, x86_64-pc-linux-gnu
> > > and no issues.
> > >
> > > Testcases are added in patch 2/2.
> > >
> > > Ok for master?
> > >
> > > Thanks,
> > > Tamar
> > >
> > > gcc/ChangeLog:
> > >
> > >         * match.pd: Add bitfield and shift folding.
> > >
> > > --- inline copy of patch --
> > > diff --git a/gcc/match.pd b/gcc/match.pd
> > > index 1d407414bee278c64c00d425d9f025c1c58d853d..b225d36dc758f1581502c8d03761544bfd499c01 100644
> > > --- a/gcc/match.pd
> > > +++ b/gcc/match.pd
> > > @@ -7245,6 +7245,23 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
> > >        && ANY_INTEGRAL_TYPE_P (type) && ANY_INTEGRAL_TYPE_P (TREE_TYPE(@0)))
> > >    (IFN_REDUC_PLUS_WIDEN @0)))
> > >
> > > +/* Canonicalize BIT_FIELD_REFS and shifts to BIT_FIELD_REFS.  */
> > > +(for shift (rshift)
> > > +     op (plus)
> > > + (simplify
> > > +  (shift (BIT_FIELD_REF @0 @1 @2) integer_pow2p@3)
> > > +  (if (INTEGRAL_TYPE_P (type))
> > > +   (with { /* Can't use wide-int here as the precision differs between
> > > +             @1 and @3.  */
> > > +          unsigned HOST_WIDE_INT size = tree_to_uhwi (@1);
> > > +          unsigned HOST_WIDE_INT shiftc = tree_to_uhwi (@3);
> > > +          unsigned HOST_WIDE_INT newsize = size - shiftc;
> > > +          tree nsize = wide_int_to_tree (bitsizetype, newsize);
> > > +          tree ntype
> > > +            = build_nonstandard_integer_type (newsize, 1); }
> >
> > Maybe use `build_nonstandard_integer_type (newsize, /* unsignedp = */
> > true);` or better yet `build_nonstandard_integer_type (newsize,
> > UNSIGNED);`
>
> Ah, will do,
> Tamar.
>
> >
> > I had started to convert some of the unsignedp parameters into enum
> > signop, but I never finished or submitted the patch.
> >
> > Thanks,
> > Andrew Pinski
> >
> >
> > > +    (if (ntype)
> > > +     (convert:type (BIT_FIELD_REF:ntype @0 { nsize; } (op @2 @3))))))))
> > > +
> > >  (simplify
> > >   (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
> > >  (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
> > >
> > >
> > >
> > >
> > > --

[-- Attachment #2: ed7c08c.diff --]
[-- Type: text/plain, Size: 2057 bytes --]

From ed7c08c4d565bd4418cf2dce3bbfecc18fdd42a2 Mon Sep 17 00:00:00 2001
From: Andrew Pinski <apinski@marvell.com>
Date: Wed, 25 Dec 2019 01:20:13 +0000
Subject: [PATCH] Add simplification of shift of a bit_field.

We can simplify a left shift of a bit_field_ref into a shift of a
bit_and (and sometimes the shift can be removed entirely).

Change-Id: I1a9f3fc87889ecd7cf569272405b6ee7dd5f8d7b
Signed-off-by: Andrew Pinski <apinski@marvell.com>
---

diff --git a/gcc/match.pd b/gcc/match.pd
index cb981ec..e4f6d47 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -6071,6 +6071,34 @@
     (cmp (bit_and @0 { wide_int_to_tree (type1, mask); })
          { wide_int_to_tree (type1, cst); })))))
 
+/* lshift<bitfield<>> -> shift(bit_and(@0, mask)) */
+(simplify
+ (lshift (convert (BIT_FIELD_REF@bit @0 @bitsize @bitpos)) INTEGER_CST@1)
+ (if (INTEGRAL_TYPE_P (type)
+      && INTEGRAL_TYPE_P (TREE_TYPE (@0))
+      && tree_fits_uhwi_p (@1)
+      && (tree_nop_conversion_p (type, TREE_TYPE (@0))
+	  || (TYPE_UNSIGNED (TREE_TYPE (@0))
+	      && TYPE_UNSIGNED (TREE_TYPE (@bit))
+	      && TYPE_UNSIGNED (type)
+	      && TYPE_PRECISION (type) > tree_to_uhwi (@bitsize))))
+  (with
+   {
+     unsigned HOST_WIDE_INT bitpos = tree_to_uhwi (@bitpos);
+     unsigned HOST_WIDE_INT bitsize = tree_to_uhwi (@bitsize);
+     if (BYTES_BIG_ENDIAN)
+       bitpos = TYPE_PRECISION (TREE_TYPE (@0)) - bitpos - bitsize;
+     wide_int wmask = wi::shifted_mask (bitpos, bitsize, false, TYPE_PRECISION (type));
+   }
+   (switch
+    (if (tree_to_uhwi (@1) == bitpos)
+     (bit_and (convert @0) { wide_int_to_tree (type, wmask); }))
+    (if (tree_to_uhwi (@1) > bitpos)
+     (lshift (bit_and (convert @0) { wide_int_to_tree (type, wmask); })
+	     { wide_int_to_tree (integer_type_node, tree_to_uhwi (@1) - bitpos); } ))
+    (if (tree_to_uhwi (@1) < bitpos)
+     (rshift (bit_and (convert @0) { wide_int_to_tree (type, wmask); })
+	     { wide_int_to_tree (integer_type_node, bitpos - tree_to_uhwi (@1)); } ))))))
 
 (if (canonicalize_math_after_vectorization_p ())
  (for fmas (FMA)


Thread overview: 19+ messages
2022-09-23 11:42 Tamar Christina
2022-09-23 11:43 ` [PATCH 2/2]AArch64 Perform more late folding of reg moves and shifts which arrive after expand Tamar Christina
2022-09-23 14:32   ` Richard Sandiford
2022-10-31 11:48     ` Tamar Christina
2022-11-14 21:54       ` Richard Sandiford
2022-11-14 21:59         ` Richard Sandiford
2022-12-01 16:25           ` Tamar Christina
2022-12-01 18:38             ` Richard Sandiford
2022-09-24 18:38 ` [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into BIT_FIELD_REFs alone Jeff Law
2022-09-28 13:19   ` Tamar Christina
2022-09-28 17:25     ` Jeff Law
2022-09-24 18:57 ` Andrew Pinski
2022-09-26  4:55   ` Tamar Christina
2022-09-26  8:05     ` Richard Biener
2022-09-26 15:24     ` Andrew Pinski [this message]
2022-09-27 12:40       ` Richard Biener
2022-10-31 11:51         ` Tamar Christina
2022-10-31 16:24           ` Jeff Law
2022-11-07 13:29           ` Richard Biener
