public inbox for gcc-cvs@sourceware.org
From: Richard Biener <rguenth@gcc.gnu.org>
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r11-9584] tree-optimization/103544 - SLP reduction chain as SLP reduction issue
Date: Thu, 17 Feb 2022 10:13:52 +0000 (GMT)
Message-ID: <20220217101352.2E1F83858D20@sourceware.org>

https://gcc.gnu.org/g:d1dc0f6222ecb6eda674e708235c7bc0d90fa987

commit r11-9584-gd1dc0f6222ecb6eda674e708235c7bc0d90fa987
Author: Richard Biener <rguenther@suse.de>
Date:   Mon Dec 6 11:43:28 2021 +0100

    tree-optimization/103544 - SLP reduction chain as SLP reduction issue
    
    When SLP reduction chain vectorization support added handling of an
    outer conversion in the chain, picking a failed reduction up as an
    SLP reduction broke the invariant that the whole reduction was
    forward reachable.  The following plugs that hole, noting a future
    enhancement possibility.
    
    2021-12-06  Richard Biener  <rguenther@suse.de>
    
    	PR tree-optimization/103544
    	* tree-vect-slp.c (vect_analyze_slp): Only add a SLP reduction
    	opportunity if the stmt in question is the reduction root.
    
    	* gcc.dg/vect/pr103544.c: New testcase.
(cherry picked from commit ee01694151edc7e8aef84dc3c484469e2ae443a0)

Diff:
---
 gcc/testsuite/gcc.dg/vect/pr103544.c | 24 ++++++++++++++++++++++++
 gcc/tree-vect-slp.c                  |  9 +++++++--
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/gcc/testsuite/gcc.dg/vect/pr103544.c b/gcc/testsuite/gcc.dg/vect/pr103544.c
new file mode 100644
index 00000000000..c8bdee86e77
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/pr103544.c
@@ -0,0 +1,24 @@
+/* { dg-do compile } */
+/* { dg-additional-options "-O3" } */
+/* { dg-additional-options "-march=haswell" { target x86_64-*-* i?86-*-* } } */
+
+int crash_me(char* ptr, unsigned long size)
+{
+  short result[16] = {0};
+
+  unsigned long no_iters = 0;
+  for(unsigned long i = 0; i < size - 12; i+= 13){
+    for(unsigned long j = 0; j < 12; j++){
+      result[j] += ptr[i + j] - '0';
+    }
+    no_iters++;
+  }
+
+  int result_int = 0;
+  for(int j = 0; j < 12; j++){
+    int bit_value = result[j] > no_iters/2 ? 1 : 0;
+    result_int |= bit_value;
+  }
+
+  return result_int;
+}
diff --git a/gcc/tree-vect-slp.c b/gcc/tree-vect-slp.c
index 230ff4081a5..ee39527eb69 100644
--- a/gcc/tree-vect-slp.c
+++ b/gcc/tree-vect-slp.c
@@ -2915,8 +2915,13 @@ vect_analyze_slp (vec_info *vinfo, unsigned max_tree_size)
 	      vinfo = next;
 	    }
 	  STMT_VINFO_DEF_TYPE (first_element) = vect_internal_def;
-	  /* It can be still vectorized as part of an SLP reduction.  */
-	  loop_vinfo->reductions.safe_push (last);
+	  /* It can be still vectorized as part of an SLP reduction.
+	     ??? But only if we didn't skip a conversion around the group.
+	     In that case we'd have to reverse engineer that conversion
+	     stmt following the chain using reduc_idx and from the PHI
+	     using reduc_def.  */
+	  if (STMT_VINFO_DEF_TYPE (last) == vect_reduction_def)
+	    loop_vinfo->reductions.safe_push (last);
 	}
 
       /* Find SLP sequences starting from groups of reductions.  */