public inbox for gcc-bugs@sourceware.org
From: "rguenther at suse dot de" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/107160] [13 regression] r13-2641-g0ee1548d96884d causes verification failure in spec2006
Date: Thu, 13 Oct 2022 11:05:59 +0000 [thread overview]
Message-ID: <bug-107160-4-aUJs3ZFcUX@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-107160-4@http.gcc.gnu.org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107160
--- Comment #10 from rguenther at suse dot de <rguenther at suse dot de> ---
On Thu, 13 Oct 2022, linkw at gcc dot gnu.org wrote:
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107160
>
> --- Comment #9 from Kewen Lin <linkw at gcc dot gnu.org> ---
> >
> > The above doesn't look wrong (but may miss the rest of the IL). On
> > x86_64 this looks like
> >
> > <bb 4> [local count: 105119324]:
> > # sum0_41 = PHI <sum0_28(3)>
> > # sum1_39 = PHI <sum1_29(3)>
> > # sum2_37 = PHI <sum2_30(3)>
> > # sum3_35 = PHI <sum3_31(3)>
> > # vect_sum3_31.11_59 = PHI <vect_sum3_31.11_60(3)>
> > _58 = BIT_FIELD_REF <vect_sum3_31.11_59, 32, 0>;
> > _57 = BIT_FIELD_REF <vect_sum3_31.11_59, 32, 32>;
> > _56 = BIT_FIELD_REF <vect_sum3_31.11_59, 32, 64>;
> > _55 = BIT_FIELD_REF <vect_sum3_31.11_59, 32, 96>;
> > _74 = _58 + _57;
> > _76 = _56 + _74;
> > _78 = _55 + _76;
> >
> > <bb 5> [local count: 118111600]:
> > # prephitmp_79 = PHI <_78(4), 0.0(2)>
> > return prephitmp_79;
> >
>
> Yeah, it looks expected without unrolling.
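[Editorial sketch: what the quoted epilogue computes can be written out in plain C. This is a hedged illustration, not GCC-generated code; the `v4sf` struct and `reduce_v4sf` name are hypothetical stand-ins for the V4SF accumulator `vect_sum3_31.11_59` in the dump. Each `BIT_FIELD_REF <v, 32, N>` extracts one 32-bit lane at bit offset N, and the scalar adds chain the four lanes together.]

```c
#include <assert.h>

/* Hypothetical stand-in for a 4 x float vector register
   (vect_sum3_31.11_59 in the dump above). */
typedef struct { float lane[4]; } v4sf;

/* The reduction epilogue exactly as the IL expresses it:
   extract each 32-bit lane (the BIT_FIELD_REFs), then chain
   the scalar additions in the same order as the dump. */
static float reduce_v4sf(v4sf v)
{
    float _58 = v.lane[0];   /* BIT_FIELD_REF <v, 32, 0>  */
    float _57 = v.lane[1];   /* BIT_FIELD_REF <v, 32, 32> */
    float _56 = v.lane[2];   /* BIT_FIELD_REF <v, 32, 64> */
    float _55 = v.lane[3];   /* BIT_FIELD_REF <v, 32, 96> */
    float _74 = _58 + _57;
    float _76 = _56 + _74;
    float _78 = _55 + _76;
    return _78;              /* becomes prephitmp_79 in bb 5 */
}
```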
>
> > when unrolling is applied, thus with a larger VF, you should ideally
> > see the vectors accumulated.
> >
> > Btw, I fixed an SLP reduction issue two days ago in
> > r13-3226-gee467644c53ee2, though that looks unrelated?
>
> Thanks for the information, I'll double check it.
>
> >
> > When I force a larger VF on x86 by adding an int store in the loop I see
> >
> > <bb 11> [local count: 94607391]:
> > # sum0_48 = PHI <sum0_29(3)>
> > # sum1_36 = PHI <sum1_30(3)>
> > # sum2_35 = PHI <sum2_31(3)>
> > # sum3_24 = PHI <sum3_32(3)>
> > # vect_sum3_32.16_110 = PHI <vect_sum3_32.16_106(3)>
> > # vect_sum3_32.16_111 = PHI <vect_sum3_32.16_107(3)>
> > # vect_sum3_32.16_112 = PHI <vect_sum3_32.16_108(3)>
> > # vect_sum3_32.16_113 = PHI <vect_sum3_32.16_109(3)>
> > _114 = BIT_FIELD_REF <vect_sum3_32.16_110, 32, 0>;
> > _115 = BIT_FIELD_REF <vect_sum3_32.16_110, 32, 32>;
> > _116 = BIT_FIELD_REF <vect_sum3_32.16_110, 32, 64>;
> > _117 = BIT_FIELD_REF <vect_sum3_32.16_110, 32, 96>;
> > _118 = BIT_FIELD_REF <vect_sum3_32.16_111, 32, 0>;
> > _119 = BIT_FIELD_REF <vect_sum3_32.16_111, 32, 32>;
> > _120 = BIT_FIELD_REF <vect_sum3_32.16_111, 32, 64>;
> > _121 = BIT_FIELD_REF <vect_sum3_32.16_111, 32, 96>;
> > _122 = BIT_FIELD_REF <vect_sum3_32.16_112, 32, 0>;
> > _123 = BIT_FIELD_REF <vect_sum3_32.16_112, 32, 32>;
> > _124 = BIT_FIELD_REF <vect_sum3_32.16_112, 32, 64>;
> > _125 = BIT_FIELD_REF <vect_sum3_32.16_112, 32, 96>;
> > _126 = BIT_FIELD_REF <vect_sum3_32.16_113, 32, 0>;
> > _127 = BIT_FIELD_REF <vect_sum3_32.16_113, 32, 32>;
> > _128 = BIT_FIELD_REF <vect_sum3_32.16_113, 32, 64>;
> > _129 = BIT_FIELD_REF <vect_sum3_32.16_113, 32, 96>;
> > _130 = _114 + _118;
> > _131 = _115 + _119;
> > _132 = _116 + _120;
> > _133 = _117 + _121;
> > _134 = _130 + _122;
> > _135 = _131 + _123;
> > _136 = _132 + _124;
> > _137 = _133 + _125;
> > _138 = _134 + _126;
> >
> > see how the lanes from the different vectors are accumulated? (yeah,
> > we should simply add the vectors!)
>
> Yes, it's the same as what I saw on ppc64le, but the closely following
> dce6 pass removes three of the vect_sum3_32 accumulators (in your dump,
> vect_sum3_32.16_10{7,8,9}), since the subsequent join points don't
> actually use the separately accumulated lane values (_138 -> sum0 ...)
> but only use vect_sum3_32.16_110.
I do - the epilogue is even vectorized, and it works fine at runtime.
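[Editorial sketch: the point in the larger-VF dump above - "we should simply add the vectors!" - is that the 16 per-lane scalar adds across the four accumulators are exactly element-wise vector additions, so the epilogue could add the four vectors first and reduce a single vector. Hedged illustration in plain C; the `v4sf`, `reduce_lanes`, and `reduce_vectors` names are hypothetical, and both routines use the same per-lane-then-across-lanes association the IL does, so they compute identical results.]

```c
#include <assert.h>

/* Hypothetical stand-in for a 4 x float vector register. */
typedef struct { float lane[4]; } v4sf;

/* What the dump does, in spirit: accumulate each lane across the
   four vector accumulators as scalars (the 16 BIT_FIELD_REF
   extracts plus the scalar adds), then combine the lanes. */
static float reduce_lanes(v4sf a, v4sf b, v4sf c, v4sf d)
{
    float s = 0.0f;
    for (int i = 0; i < 4; i++)
        s += a.lane[i] + b.lane[i] + c.lane[i] + d.lane[i];
    return s;
}

/* The suggested alternative: add the vectors element-wise first
   (one vector add per accumulator), then reduce one vector. */
static float reduce_vectors(v4sf a, v4sf b, v4sf c, v4sf d)
{
    v4sf t;
    for (int i = 0; i < 4; i++)
        t.lane[i] = a.lane[i] + b.lane[i] + c.lane[i] + d.lane[i];
    float s = 0.0f;
    for (int i = 0; i < 4; i++)
        s += t.lane[i];
    return s;
}
```

Since both versions associate the additions the same way (across vectors per lane, then across lanes), the change is purely a matter of using vector adds instead of extracted scalars.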
Thread overview: 21+ messages
2022-10-05 16:21 [Bug target/107160] New: " seurer at gcc dot gnu.org
2022-10-06 10:43 ` [Bug target/107160] " rguenth at gcc dot gnu.org
2022-10-06 14:04 ` seurer at gcc dot gnu.org
2022-10-06 19:08 ` seurer at gcc dot gnu.org
2022-10-10 2:29 ` linkw at gcc dot gnu.org
2022-10-10 19:14 ` seurer at gcc dot gnu.org
2022-10-12 8:31 ` linkw at gcc dot gnu.org
2022-10-13 9:57 ` linkw at gcc dot gnu.org
2022-10-13 10:16 ` rguenth at gcc dot gnu.org
2022-10-13 10:31 ` linkw at gcc dot gnu.org
2022-10-13 11:05 ` rguenther at suse dot de [this message]
2022-10-13 11:45 ` linkw at gcc dot gnu.org
2022-10-13 11:50 ` rguenther at suse dot de
2022-10-13 12:01 ` marxin at gcc dot gnu.org
2022-10-13 12:05 ` rguenth at gcc dot gnu.org
2022-10-13 12:23 ` rguenth at gcc dot gnu.org
2022-10-13 13:17 ` cvs-commit at gcc dot gnu.org
2022-10-13 13:19 ` [Bug target/107160] [12/13 " rguenth at gcc dot gnu.org
2022-10-14 2:54 ` [Bug target/107160] [12 " linkw at gcc dot gnu.org
2022-10-17 13:10 ` cvs-commit at gcc dot gnu.org
2022-10-17 13:11 ` rguenth at gcc dot gnu.org