* [pushed] aarch64: Commonise some folding code
@ 2022-10-20 9:42 Richard Sandiford
From: Richard Sandiford @ 2022-10-20 9:42 UTC (permalink / raw)
To: gcc-patches
Add an aarch64_sve::gimple_folder helper for folding calls
to integer constants. SME will make more use of this.
Tested on aarch64-linux-gnu & pushed.
Richard
gcc/
* config/aarch64/aarch64-sve-builtins.h
(gimple_folder::fold_to_cstu): New member function.
* config/aarch64/aarch64-sve-builtins.cc
(gimple_folder::fold_to_cstu): Define.
* config/aarch64/aarch64-sve-builtins-base.cc
(svcnt_bhwd_impl::fold): Use it.
---
gcc/config/aarch64/aarch64-sve-builtins-base.cc | 9 ++-------
gcc/config/aarch64/aarch64-sve-builtins.cc | 7 +++++++
gcc/config/aarch64/aarch64-sve-builtins.h | 1 +
3 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/gcc/config/aarch64/aarch64-sve-builtins-base.cc b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
index 141f44d4d94..23b4d42822a 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins-base.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins-base.cc
@@ -517,9 +517,7 @@ public:
gimple *
fold (gimple_folder &f) const override
{
- tree count = build_int_cstu (TREE_TYPE (f.lhs),
- GET_MODE_NUNITS (m_ref_mode));
- return gimple_build_assign (f.lhs, count);
+ return f.fold_to_cstu (GET_MODE_NUNITS (m_ref_mode));
}
rtx
@@ -553,10 +551,7 @@ public:
unsigned int elements_per_vq = 128 / GET_MODE_UNIT_BITSIZE (m_ref_mode);
HOST_WIDE_INT value = aarch64_fold_sve_cnt_pat (pattern, elements_per_vq);
if (value >= 0)
- {
- tree count = build_int_cstu (TREE_TYPE (f.lhs), value);
- return gimple_build_assign (f.lhs, count);
- }
+ return f.fold_to_cstu (value);
return NULL;
}
diff --git a/gcc/config/aarch64/aarch64-sve-builtins.cc b/gcc/config/aarch64/aarch64-sve-builtins.cc
index 63b1358c138..37228f6389a 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins.cc
+++ b/gcc/config/aarch64/aarch64-sve-builtins.cc
@@ -2615,6 +2615,13 @@ gimple_folder::redirect_call (const function_instance &instance)
return call;
}
+/* Fold the call to constant VAL. */
+gimple *
+gimple_folder::fold_to_cstu (poly_uint64 val)
+{
+ return gimple_build_assign (lhs, build_int_cstu (TREE_TYPE (lhs), val));
+}
+
/* Fold the call to a PTRUE, taking the element size from type suffix 0. */
gimple *
gimple_folder::fold_to_ptrue ()
diff --git a/gcc/config/aarch64/aarch64-sve-builtins.h b/gcc/config/aarch64/aarch64-sve-builtins.h
index 63d1db776f7..0d130b871d0 100644
--- a/gcc/config/aarch64/aarch64-sve-builtins.h
+++ b/gcc/config/aarch64/aarch64-sve-builtins.h
@@ -500,6 +500,7 @@ public:
tree load_store_cookie (tree);
gimple *redirect_call (const function_instance &);
+ gimple *fold_to_cstu (poly_uint64);
gimple *fold_to_pfalse ();
gimple *fold_to_ptrue ();
gimple *fold_to_vl_pred (unsigned int);
--
2.25.1