From: Tamar Christina <tamar.christina@arm.com>
To: gcc-patches@gcc.gnu.org
Cc: nd@arm.com, rguenther@suse.de, jeffreyalaw@gmail.com
Subject: [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into BIT_FIELD_REFs alone
Date: Fri, 23 Sep 2022 12:42:12 +0100 [thread overview]
Message-ID: <patch-15776-tamar@arm.com> (raw)
Hi All,
This adds a match.pd rule that folds a right shift of a BIT_FIELD_REF on an
integer into a single BIT_FIELD_REF with an adjusted offset and size, followed
by an extension back to the original width.
Concretely, it turns:
#include <arm_neon.h>
unsigned int foor (uint32x4_t x)
{
return x[1] >> 16;
}
which used to generate:
_1 = BIT_FIELD_REF <x_2(D), 32, 32>;
_3 = _1 >> 16;
into:
_4 = BIT_FIELD_REF <x_1(D), 16, 48>;
_2 = (unsigned int) _4;
I currently limit the rewrite to cases where the resulting extract is in a
mode the target supports, i.e. it won't rewrite the extract to, say, 13 bits,
because on targets without a bit-field extract instruction this could be a
de-optimization.
Bootstrapped and regtested on aarch64-none-linux-gnu and x86_64-pc-linux-gnu
with no issues.
Testcases are added in patch 2/2.
Ok for master?
Thanks,
Tamar
gcc/ChangeLog:
* match.pd: Add bitfield and shift folding.
--- inline copy of patch --
diff --git a/gcc/match.pd b/gcc/match.pd
index 1d407414bee278c64c00d425d9f025c1c58d853d..b225d36dc758f1581502c8d03761544bfd499c01 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -7245,6 +7245,23 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
&& ANY_INTEGRAL_TYPE_P (type) && ANY_INTEGRAL_TYPE_P (TREE_TYPE(@0)))
(IFN_REDUC_PLUS_WIDEN @0)))
+/* Canonicalize BIT_FIELD_REFS and shifts to BIT_FIELD_REFS. */
+(for shift (rshift)
+ op (plus)
+ (simplify
+ (shift (BIT_FIELD_REF @0 @1 @2) integer_pow2p@3)
+ (if (INTEGRAL_TYPE_P (type))
+ (with { /* Can't use wide-int here as the precision differs between
+ @1 and @3. */
+ unsigned HOST_WIDE_INT size = tree_to_uhwi (@1);
+ unsigned HOST_WIDE_INT shiftc = tree_to_uhwi (@3);
+ unsigned HOST_WIDE_INT newsize = size - shiftc;
+ tree nsize = wide_int_to_tree (bitsizetype, newsize);
+ tree ntype
+ = build_nonstandard_integer_type (newsize, 1); }
+ (if (ntype)
+ (convert:type (BIT_FIELD_REF:ntype @0 { nsize; } (op @2 @3))))))))
+
(simplify
(BIT_FIELD_REF (BIT_FIELD_REF @0 @1 @2) @3 @4)
(BIT_FIELD_REF @0 @3 { const_binop (PLUS_EXPR, bitsizetype, @2, @4); }))
--
Thread overview: 19+ messages
2022-09-23 11:42 Tamar Christina [this message]
2022-09-23 11:43 ` [PATCH 2/2]AArch64 Perform more late folding of reg moves and shifts which arrive after expand Tamar Christina
2022-09-23 14:32 ` Richard Sandiford
2022-10-31 11:48 ` Tamar Christina
2022-11-14 21:54 ` Richard Sandiford
2022-11-14 21:59 ` Richard Sandiford
2022-12-01 16:25 ` Tamar Christina
2022-12-01 18:38 ` Richard Sandiford
2022-09-24 18:38 ` [PATCH 1/2]middle-end Fold BIT_FIELD_REF and Shifts into BIT_FIELD_REFs alone Jeff Law
2022-09-28 13:19 ` Tamar Christina
2022-09-28 17:25 ` Jeff Law
2022-09-24 18:57 ` Andrew Pinski
2022-09-26 4:55 ` Tamar Christina
2022-09-26 8:05 ` Richard Biener
2022-09-26 15:24 ` Andrew Pinski
2022-09-27 12:40 ` Richard Biener
2022-10-31 11:51 ` Tamar Christina
2022-10-31 16:24 ` Jeff Law
2022-11-07 13:29 ` Richard Biener