public inbox for gcc-cvs@sourceware.org
From: Philipp Tomsich <ptomsich@gcc.gnu.org>
To: gcc-cvs@gcc.gnu.org
Subject: [gcc(refs/vendors/vrull/heads/slp-improvements)] tree-optimization: use fewer lanes on VEC_PERM_EXPR for two operators
Date: Tue, 27 Feb 2024 13:37:28 +0000 (GMT)
Message-ID: <20240227133728.75FBC3858438@sourceware.org> (raw)

https://gcc.gnu.org/g:627830ba3d5fe1e233cc1dd88572fb1a24aed2ef

commit 627830ba3d5fe1e233cc1dd88572fb1a24aed2ef
Author: Manolis Tsamis <manolis.tsamis@vrull.eu>
Date:   Fri Nov 17 17:42:30 2023 +0100

    tree-optimization: use fewer lanes on VEC_PERM_EXPR for two operators

    Currently, when SLP nodes are built with "two_operators == true", the
    VEC_PERM_EXPR that merges the result selects lanes based only on the
    operator found.  When the input nodes contain duplicate elements,
    there may be more than one way to choose.  This commit tries to reuse
    an existing lane if possible, which can free up lanes for use in
    other optimizations.

    For example, given two vectors with duplicates:

      A = {a1, a1, a2, a2}
      B = {b1, b1, b2, b2}

    a two_operator node with operators +, -, +, - can, with this commit,
    be built as

      RES = VEC_PERM_EXPR<A, B>(0, 4, 2, 6)

    and use 2 lanes.  The existing implementation would have built a
    (0, 5, 2, 7) permutation and used 4 lanes.

    This commit adds a case so that, if the current element can be found
    in another lane that has been used previously, that lane is reused.
    This can happen when ONE and TWO contain duplicate elements, and it
    reduces the number of 'active' lanes.
Diff:
---
 gcc/tree-vect-slp.cc | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/gcc/tree-vect-slp.cc b/gcc/tree-vect-slp.cc
index 238a17ca4e1..c5e9833653d 100644
--- a/gcc/tree-vect-slp.cc
+++ b/gcc/tree-vect-slp.cc
@@ -2906,7 +2906,25 @@ fail:
       gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
       if (gimple_assign_rhs_code (ostmt) != code0)
 	{
-	  SLP_TREE_LANE_PERMUTATION (node).safe_push (std::make_pair (1, i));
+	  /* If the current element can be found in another lane that has
+	     been used previously then use that one instead.  This can
+	     happen when the ONE and TWO contain duplicate elements and
+	     reduces the number of 'active' lanes.  */
+	  int idx = i;
+	  for (int alt_idx = (int) i - 1; alt_idx >= 0; alt_idx--)
+	    {
+	      gassign *alt_stmt = as_a <gassign *> (stmts[alt_idx]->stmt);
+	      if (gimple_assign_rhs_code (alt_stmt) == code0
+		  && gimple_assign_rhs1 (ostmt)
+		     == gimple_assign_rhs1 (alt_stmt)
+		  && gimple_assign_rhs2 (ostmt)
+		     == gimple_assign_rhs2 (alt_stmt))
+		{
+		  idx = alt_idx;
+		  break;
+		}
+	    }
+	  SLP_TREE_LANE_PERMUTATION (node).safe_push (std::make_pair (1, idx));
 	  ocode = gimple_assign_rhs_code (ostmt);
 	  j = i;
 	}
Thread overview: 4+ messages

  2024-02-27 13:37 Philipp Tomsich [this message]
  2024-01-23 20:57 Philipp Tomsich
  2024-01-17 19:14 Philipp Tomsich
  2023-11-28 13:35 Philipp Tomsich