public inbox for gcc-bugs@sourceware.org
From: "rguenth at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/114987] [14/15 Regression] floating point vector regression, x86, between gcc 14 and gcc-13 using -O3 and target clones on skylake platforms
Date: Fri, 10 May 2024 07:52:37 +0000
Message-ID: <bug-114987-4-5kQ2WlvEjX@http.gcc.gnu.org/bugzilla/>
In-Reply-To: <bug-114987-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114987

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
            Status|UNCONFIRMED                 |NEW
           Summary|[14/15 regression] floating |[14/15 Regression] floating
                   |point vector regression,    |point vector regression,
                   |x86, between gcc 14 and     |x86, between gcc 14 and
                   |gcc-13 using -O3 and target |gcc-13 using -O3 and target
                   |clones on skylake platforms |clones on skylake platforms
    Ever confirmed|0                           |1
  Last reconfirmed|                            |2024-05-10
            Target|x86_64                      |x86_64-*-*
  Target Milestone|---                         |14.2

--- Comment #4 from Richard Biener <rguenth at gcc dot gnu.org> ---
I can't reproduce a slowdown on a Zen2 CPU.  The difference seems to be
merely instruction scheduling.

I do note we're not doing a good job in handling

  for (i = 0; i < LOOPS_PER_CALL; i++) {
    r.v = r.v + add.v;
  }

where r.v and add.v are AVX512-sized vectors, when emulating them with AVX
vectors.
We end up with

  r_v_lsm.48_48 = r.v;
  _11 = add.v;

  <bb 3> [local count: 1063004408]:
  # r_v_lsm.48_50 = PHI <_12(3), r_v_lsm.48_48(2)>
  # ivtmp_56 = PHI <ivtmp_55(3), 65536(2)>
  _16 = BIT_FIELD_REF <_11, 256, 0>;
  _37 = BIT_FIELD_REF <r_v_lsm.48_50, 256, 0>;
  _29 = _16 + _37;
  _387 = BIT_FIELD_REF <_11, 256, 256>;
  _375 = BIT_FIELD_REF <r_v_lsm.48_50, 256, 256>;
  _363 = _387 + _375;
  _12 = {_29, _363};
  ivtmp_55 = ivtmp_56 - 1;
  if (ivtmp_55 != 0)
    goto <bb 3>; [98.99%]
  else
    goto <bb 4>; [1.01%]

  <bb 4> [local count: 10737416]:

after lowering from 512-bit to 256-bit vectors, and there's no pass that
would demote the 512-bit reduction value to two 256-bit ones.  There are
also weird things going on in the target/on RTL.

A smaller testcase illustrating the code-generation issue is

  typedef float v16sf __attribute__((vector_size(sizeof(float)*16)));

  void foo (v16sf * __restrict r, v16sf *a, int n)
  {
    for (int i = 0; i < n; ++i)
      *r = *r + *a;
  }

So confirmed for non-optimal code, but I don't see how it's a regression.
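(Not part of the original comment: the demotion described as missing above can
be sketched by hand.  The idea is to keep the 512-bit accumulator as two
256-bit halves across the loop and recombine only once at the end, avoiding the
per-iteration BIT_FIELD_REF extraction and CONSTRUCTOR rebuild visible in the
GIMPLE dump.  This is an illustrative source-level sketch using the GNU
vector_size extension, not GCC's actual lowering; the function name
foo_demoted is made up.)

```c
#include <string.h>

typedef float v16sf __attribute__((vector_size (sizeof (float) * 16)));
typedef float v8sf  __attribute__((vector_size (sizeof (float) * 8)));

void
foo_demoted (v16sf *__restrict r, v16sf *a, int n)
{
  /* Split the 512-bit accumulator into two 256-bit halves once,
     before the loop, instead of re-splitting it on every iteration.  */
  v8sf lo, hi, alo, ahi;
  memcpy (&lo, (char *) r, sizeof lo);
  memcpy (&hi, (char *) r + sizeof lo, sizeof hi);
  for (int i = 0; i < n; ++i)
    {
      /* *a is still re-loaded each iteration, matching foo above.  */
      memcpy (&alo, (char *) a, sizeof alo);
      memcpy (&ahi, (char *) a + sizeof alo, sizeof ahi);
      lo += alo;
      hi += ahi;
    }
  /* Recombine into the 512-bit result only after the loop.  */
  memcpy ((char *) r, &lo, sizeof lo);
  memcpy ((char *) r + sizeof lo, &hi, sizeof hi);
}
```

With -O3 on a target where 512-bit vectors are emulated with 256-bit ones,
this shape lets the two halves live in separate 256-bit accumulators for the
whole loop, which is what a demotion pass would achieve automatically.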
Thread overview: 8+ messages [~2024-05-10 7:52 UTC | newest]

  2024-05-08 14:44 [Bug c/114987] New: [14/15 Regression] floating point vector regression, x86, between gcc 14 and gcc-13 using -O3 and target clones on skylake platforms  colin.king at intel dot com
  2024-05-08 14:45 ` [Bug c/114987] " colin.king at intel dot com
  2024-05-08 14:45 ` colin.king at intel dot com
  2024-05-08 15:00 ` colin.king at intel dot com
  2024-05-10  7:52 ` rguenth at gcc dot gnu.org [this message]
  2024-05-10  8:00 ` [Bug target/114987] [14/15 Regression] " haochen.jiang at intel dot com
  2024-05-10  8:05 ` liuhongt at gcc dot gnu.org
  2024-05-10  8:42 ` haochen.jiang at intel dot com