From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/113441] [14 Regression] Fail to fold the last element with multiple loop since g:2efe3a7de0107618397264017fb045f237764cc7
Date: Mon, 04 Mar 2024 14:28:54 +0000

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113441

--- Comment #37 from Richard Biener ---
(In reply to Richard Sandiford from comment #36)
> Created attachment 57602 [details]
> proof-of-concept patch to suppress peeling for gaps
>
> This patch does what I suggested in the previous comment: if the loop needs
> peeling for gaps, try again without that, and pick the better loop.  It
> seems to restore the original style of code for SVE.
>
> A more polished version would be a bit smarter about when to retry.  E.g.
> it's pointless if the main loop already operates on full vectors (i.e. if
> peeling 1 iteration is natural in any case).  Perhaps the condition should
> be that either (a) the number of epilogue iterations is known to be equal to
> the VF of the main loop or (b) the target is known to support partial
> vectors for the loop's vector_mode.
>
> Any thoughts?

Even more iteration of the analysis looks bad.  I do wonder why, when
gathers can avoid peeling for gaps, load-lanes cannot.  Also for the
stores we seem to use elementwise stores rather than store-lanes.

To me the most obvious thing to try optimizing in this testcase is DR
analysis.
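For context, "peeling for gaps" is the vectorizer's safeguard for
interleaved access groups that do not use every element of their stride:
the last vector load of such a group would read elements beyond the final
scalar access, possibly off the end of the object, so one scalar iteration
is peeled off into the epilogue instead.  A hypothetical illustration, not
the PR's testcase:

/* The loads of .a and .c form a strided group of stride 4 ints with
   gaps at .b and .d.  Because the group has a gap at its end, a full
   contiguous vector load for the last iterations would read past
   p[n-1].c into memory the scalar loop never touches, so the
   vectorizer peels one iteration ("peeling for gaps") unless it can
   use load-lanes or gathers that fetch only the live elements.  */
struct s { int a, b, c, d; };

int
sum_ac (const struct s *p, int n)
{
  int sum = 0;
  for (int i = 0; i < n; i++)
    sum += p[i].a + p[i].c;
  return sum;
}

The patch in comment #36 costs the loop both with and without that peeling
and keeps the cheaper variant.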
With -march=armv8.3-a I still see

t.c:26:22: note:  === vect_analyze_data_ref_accesses ===
t.c:26:22: note:  Detected single element interleaving array1[0][_8] step 4
t.c:26:22: note:  Detected single element interleaving array1[1][_8] step 4
t.c:26:22: note:  Detected single element interleaving array1[2][_8] step 4
t.c:26:22: note:  Detected single element interleaving array1[3][_8] step 4
t.c:26:22: note:  Detected single element interleaving array1[0][_1] step 4
t.c:26:22: note:  Detected single element interleaving array1[1][_1] step 4
t.c:26:22: note:  Detected single element interleaving array1[2][_1] step 4
t.c:26:22: note:  Detected single element interleaving array1[3][_1] step 4
t.c:26:22: missed:  not consecutive access array2[_4][_8] = _69;
t.c:26:22: note:  using strided accesses
t.c:26:22: missed:  not consecutive access array2[_4][_1] = _67;
t.c:26:22: note:  using strided accesses

so we don't figure that

Creating dr for array1[0][_1]
        base_address: &array1
        offset from base address: (ssizetype) ((sizetype) (m_111 * 2) * 2)
        constant offset from base address: 0
        step: 4
        base alignment: 16
        base misalignment: 0
        offset alignment: 4
        step alignment: 4
        base_object: array1
        Access function 0: {m_111 * 2, +, 2}_4
        Access function 1: 0

Creating dr for array1[0][_8]
analyze_innermost: success.
        base_address: &array1
        offset from base address: (ssizetype) ((sizetype) (m_111 * 2 + 1) * 2)
        constant offset from base address: 0
        step: 4
        base alignment: 16
        base misalignment: 0
        offset alignment: 2
        step alignment: 4
        base_object: array1
        Access function 0: {m_111 * 2 + 1, +, 2}_4
        Access function 1: 0

belong to the same group (but the access functions tell us it worked out).
Above we fail to split the + 1 into the constant offset.

See my hint to use int32_t m instead of uint32_t, yielding

t.c:26:22: note:  Detected interleaving load of size 2
t.c:26:22: note:    _2 = array1[0][_1];
t.c:26:22: note:    _9 = array1[0][_8];
t.c:26:22: note:  Detected interleaving load of size 2
t.c:26:22: note:    _18 = array1[1][_1];
t.c:26:22: note:    _23 = array1[1][_8];
t.c:26:22: note:  Detected interleaving load of size 2
t.c:26:22: note:    _32 = array1[2][_1];
t.c:26:22: note:    _37 = array1[2][_8];
t.c:26:22: note:  Detected interleaving load of size 2
t.c:26:22: note:    _46 = array1[3][_1];
t.c:26:22: note:    _51 = array1[3][_8];
t.c:26:22: note:  Detected interleaving store of size 2
t.c:26:22: note:    array2[_4][_1] = _67;
t.c:26:22: note:    array2[_4][_8] = _69;

(and SLP being thrown away because we can use load/store lanes)
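To make the signed/unsigned difference concrete, here is a hypothetical
reduction guessed from the dump above (2-byte elements, four rows of
array1, column indices 2*m and 2*m + 1); the PR's real t.c will differ
in its details:

#include <stdint.h>

int16_t array1[4][512];
int16_t array2[512][512];

/* With uint32_t m the even and odd column accesses stay separate DRs;
   with int32_t m they form the size-2 interleaving groups shown in the
   second dump.  */
void
foo (uint32_t m, int n)
{
  for (int i = 0; i < n; i++, m++)
    {
      /* Loads with access functions {m*2, +, 2} and {m*2 + 1, +, 2}.  */
      int16_t x = array1[0][2 * m] + array1[1][2 * m]
                  + array1[2][2 * m] + array1[3][2 * m];
      int16_t y = array1[0][2 * m + 1] + array1[1][2 * m + 1]
                  + array1[2][2 * m + 1] + array1[3][2 * m + 1];
      array2[i][2 * m] = x;
      array2[i][2 * m + 1] = y;
    }
}

The mechanism: with uint32_t m the index 2 * m + 1 is computed in 32-bit
wrapping arithmetic and only then widened to sizetype for addressing, and
rewriting (sizetype)(m * 2 + 1) * 2 as (sizetype)(m * 2) * 2 + 2 is not
valid in general under wraparound, so DR analysis keeps the + 1 inside the
variable offset and the two columns get different base expressions.  With
int32_t m, signed overflow is undefined, the widening distributes over the
+ 1, the + 1 can be split into the DR's constant offset, and group
detection succeeds.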