public inbox for gcc-bugs@sourceware.org
From: "rguenth at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org> To: gcc-bugs@gcc.gnu.org Subject: [Bug tree-optimization/113552] [11/12/13/14 Regression] vectorizer generates calls to vector math routines with 1 simd lane. Date: Tue, 23 Jan 2024 10:39:49 +0000 [thread overview] Message-ID: <bug-113552-4-zsoINMERyE@http.gcc.gnu.org/bugzilla/> (raw) In-Reply-To: <bug-113552-4@http.gcc.gnu.org/bugzilla/> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113552 Richard Biener <rguenth at gcc dot gnu.org> changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |jakub at gcc dot gnu.org --- Comment #6 from Richard Biener <rguenth at gcc dot gnu.org> --- (In reply to Tamar Christina from comment #5) > __attribute__ ((__simd__ ("notinbranch"), const)) > double cos (double); So here the backend is then probably responsible to parse this into a valid list of simdlen cases. > void foo (float *a, double *b) > { > for (int i = 0; i < 12; i+=3) > { > b[i] = cos (5.0 * a[i]); > b[i+1] = cos (5.0 * a[i+1]); > b[i+2] = cos (5.0 * a[i+2]); > } > } > > Simple C example that shows the problem. > > This seems to happen when SLP succeeds and the group size is a non power of > two. > The vectorizer then unrolls to make it a power of two and during > vectorization > it seems to destroy the vector, make the call and reconstruct it. > > So this seems like an SLP vectorization bug. I can't seem to trigger it > however on GCC < 14 since SLP consistently fails for all my examples because > it tries a mode that's larger than the vector size. On the 13 branch and x86_64 the above results in a large VF and using _ZGVbN2v_cos, same on trunk. > So It may be a GCC 14 only regression, but I think it's latent in the > vectorizer. I think there's sth odd with the backend here, but I can confirm the behavior. Note it analyzes and costs VF == 4 and V2DF resulting in 6 calls but then code generation comes along doing sth different!?
Thread overview: 17+ messages in thread
2024-01-23  7:53 [Bug middle-end/113552] New: [11/12/13/14 Regression] vectorizer generates calls to vector math routines with 1 simd lane tnfchris at gcc dot gnu.org
2024-01-23  7:53 ` [Bug tree-optimization/113552] " tnfchris at gcc dot gnu.org
2024-01-23  8:13 ` rguenth at gcc dot gnu.org
2024-01-23  8:51 ` nsz at gcc dot gnu.org
2024-01-23  8:51 ` tnfchris at gcc dot gnu.org
2024-01-23  8:54 ` tnfchris at gcc dot gnu.org
2024-01-23 10:12 ` tnfchris at gcc dot gnu.org
2024-01-23 10:39 ` rguenth at gcc dot gnu.org [this message]
2024-01-23 10:56 ` rguenth at gcc dot gnu.org
2024-01-23 10:59 ` rguenth at gcc dot gnu.org
2024-01-23 11:19 ` tnfchris at gcc dot gnu.org
2024-01-23 11:57 ` rguenth at gcc dot gnu.org
2024-01-23 13:10 ` cvs-commit at gcc dot gnu.org
2024-01-24 15:58 ` cvs-commit at gcc dot gnu.org
2024-04-15 11:14 ` [Bug tree-optimization/113552] [11/12/13 " cvs-commit at gcc dot gnu.org
2024-04-15 11:38 ` cvs-commit at gcc dot gnu.org
2024-04-15 11:40 ` tnfchris at gcc dot gnu.org