[PATCH 1/2] middle-end: Fold BIT_FIELD_REF and shifts into BIT_FIELD_REFs alone
From: Tamar Christina @ 2022-09-23 11:42 UTC
  To: gcc-patches; +Cc: nd, rguenther, jeffreyalaw

Hi All,

This adds a match.pd rule that folds a right shift of a BIT_FIELD_REF of an
integer into just a BIT_FIELD_REF, by adjusting the offset and the size of
the extract and converting the narrower result back to the previous type.

Concretely, it turns:

#include <arm_neon.h>

unsigned int foor (uint32x4_t x)
{
    return x[1] >> 16;
}

which used to generate:

  _1 = BIT_FIELD_REF <x_2(D), 32, 32>;
  _3 = _1 >> 16;

into:

  _4 = BIT_FIELD_REF <x_1(D), 16, 48>;
  _2 = (unsigned int) _4;
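
To see why the two forms agree, here is a minimal sketch in plain C (no
NEON; the bits() helper is hypothetical and for illustration only, not part
of the patch): shifting the 32-bit field at bit offset 32 right by 16 gives
the same value as zero-extending the 16-bit field at bit offset 48, i.e. the
new size is 32 - 16 = 16 and the new offset is 32 + 16 = 48.

#include <stdint.h>
#include <stdio.h>

/* Extract SIZE bits of V starting at bit offset OFF (assumes SIZE < 64).  */
static uint64_t bits (uint64_t v, unsigned off, unsigned size)
{
    return (v >> off) & ((1ULL << size) - 1);
}

int main (void)
{
    uint64_t x = 0x123456789abcdef0ULL;  /* stand-in for two 32-bit lanes */
    uint32_t before = (uint32_t) bits (x, 32, 32) >> 16;  /* extract, then shift */
    uint32_t after = (uint32_t) bits (x, 48, 16);  /* single narrower extract */
    printf ("%s\n", before == after ? "equal" : "DIFFER");
    return 0;
}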

I currently limit the rewrite to cases where the resulting extract has a mode
the target supports, i.e. it won't rewrite the access to extract, say,
13 bits, because on targets without a bit-field extract instruction this may
be a de-optimization.
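
For instance (a hypothetical variation of the example above, not taken from
the patch), a shift that would leave a 13-bit field is not rewritten:

#include <arm_neon.h>

unsigned int fooq (uint32x4_t x)
{
    /* 32 - 19 = 13 remaining bits; without a 13-bit extract instruction
       the rewrite is skipped.  */
    return x[1] >> 19;
}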

Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-pc-linux-gnu
with no issues.

Testcases are added in patch 2/2.

Ok for master?

Thanks,
Tamar

gcc/ChangeLog:

	* match.pd: Add bitfield and shift folding.

--- inline copy of patch -- 
diff --git a/gcc/match.pd b/gcc/match.pd
index 1d407414bee278c64c00d425d9f025c1c58d853d..b225d36dc758f1581502c8d03761544bfd499c01 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -7245,6 +7245,23 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
       && ANY_INTEGRAL_TYPE_P (type) && ANY_INTEGRAL_TYPE_P (TREE_TYPE(@0)))
   (IFN_REDUC_PLUS_WIDEN @0)))
 
+/* Canonicalize BIT_FIELD_REFS and shifts to BIT_FIELD_REFS.  */
+(for shift (rshift)
+     op (plus)
+ (simplify
+  (shift (BIT_FIELD_REF @0 @1 @2) integer_pow2p@3)
+  (if (INTEGRAL_TYPE_P (type))
+   (with { /* Can't use wide-int here as the precision differs between
+	      @1 and @3.  */
+	   unsigned HOST_WIDE_INT size = tree_to_uhwi (@1);
+	   unsigned HOST_WIDE_INT shiftc = tree_to_uhwi (@3);
+	   unsigned HOST_WIDE_INT newsize = size - shiftc;
+	   tree nsize = wide_int_to_tree (bitsizetype, newsize);
+	   tree ntype
+	     = build_nonstandard_integer_type (newsize, 1); }
+    (if (ntype)
+     (convert:type (BIT_FIELD_REF:ntype @0 { nsize; } (op @2 @3))))))))
+
 (simplify
  (BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
  (BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
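
As an aside, the pre-existing rule just below the new one merges two extracts
into one by summing the bit offsets.  A minimal sketch of that identity
(again with a hypothetical bits() helper as above, for illustration only):

#include <assert.h>
#include <stdint.h>

/* Extract SIZE bits of V starting at bit offset OFF (assumes SIZE < 64).  */
static uint64_t bits (uint64_t v, unsigned off, unsigned size)
{
    return (v >> off) & ((1ULL << size) - 1);
}

int main (void)
{
    uint64_t x = 0x123456789abcdef0ULL;
    /* 8 bits at offset 4 of (32 bits at offset 8 of x) is the same as
       8 bits at offset 8 + 4 of x.  */
    assert (bits (bits (x, 8, 32), 4, 8) == bits (x, 8 + 4, 8));
    return 0;
}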

-- 
