Date: Wed, 5 May 2021 19:44:46 +0200
From: Jakub Jelinek
To: Richard Sandiford, Hongtao Liu, Eric Botcazou, Jeff Law
Cc: gcc-patches@gcc.gnu.org
Subject: [PATCH] regcprop: Fix another cprop_hardreg bug [PR100342]
Message-ID: <20210505174446.GU1179226@tucnak>
References: <20210119144514.GA4020736@tucnak>

On Tue, Jan 19, 2021 at 04:10:33PM +0000, Richard Sandiford via Gcc-patches wrote:
> Ah, ok, thanks for the extra context.
> 
> So AIUI the problem when recording xmm2<-di isn't just:
> 
>   [A] partial_subreg_p (vd->e[sr].mode, GET_MODE (src))
> 
> but also that:
> 
>   [B] partial_subreg_p (vd->e[sr].mode, vd->e[vd->e[sr].oldest_regno].mode)
> 
> For example, all registers in this sequence can be part of the same chain:
> 
>   (set (reg:HI R1) (reg:HI R0))
>   (set (reg:SI R2) (reg:SI R1))    // [A]
>   (set (reg:DI R3) (reg:DI R2))    // [A]
>   (set (reg:SI R4) (reg:SI R[0-3]))
>   (set (reg:HI R5) (reg:HI R[0-4]))
> 
> But:
> 
>   (set (reg:SI R1) (reg:SI R0))
>   (set (reg:HI R2) (reg:HI R1))
>   (set (reg:SI R3) (reg:SI R2))    // [A] && [B]
> 
> is problematic because it dips below the precision of the oldest regno
> and then increases again.
> 
> When this happens, I guess we have two choices:
> 
> (1) what the patch does: treat R3 as the start of a new chain.
> (2) pretend that the copy occurred in vd->e[sr].mode instead
>     (i.e. copy vd->e[sr].mode to vd->e[dr].mode)
> 
> I guess (2) would need to be subject to REG_CAN_CHANGE_MODE_P.
> Maybe the optimisation provided by (2) compared to (1) isn't common
> enough to be worth the complication.
> 
> I think we should test [B] as well as [A] though.  The pass is set
> up to do some quite elaborate mode changes and I think rejecting
> [A] on its own would make some of the other code redundant.
> It also feels like it should be a separate “if” or “else if”,
> with its own comment.

Unfortunately, we now have a testcase that shows that also testing [B] is
a problem (unfortunately now latent on the trunk; it only reproduces on
the 10 and 11 branches).

The comment in the patch tries to list just the interesting instructions:
we have a 64-bit value, copy the low 8 bits of it to another register,
copy the full 64 bits to another register and then clobber the original
register.  Before the (set (reg:DI r14) (const_int ...)) we have a chain
DI r14, QI si, DI bp; that instruction drops DI r14 from the chain, so we
have QI si, DI bp, with si being the oldest_regno.  Next, DI si is copied
into DI dx.  Only the low 8 bits of that are defined and the rest is
unspecified, but we would add DI dx at the end of that same chain, giving
QI si, DI bp, DI dx [*].  Next si is overwritten, so the chain is
DI bp, DI dx.  And then we see (set (reg:DI dx) (reg:DI bp)) and remove it
as redundant, because we think bp and dx are already equivalent, when in
reality that is true only for the low 8 bits.

I believe the step marked [*] above is where the bug is.  The committed
regcprop.c (copy_value) change (committed only to trunk/11, not to 10)
added
  else if (partial_subreg_p (vd->e[sr].mode, GET_MODE (src))
	   && partial_subreg_p (vd->e[sr].mode,
				vd->e[vd->e[sr].oldest_regno].mode))
    return;
and while the first partial_subreg_p call returns true, the second one
doesn't; before the (set (reg:DI r14) (const_int ...)) insn it would be
true and we'd return, but as that reg got clobbered, si became the oldest
regno in the chain, so vd->e[vd->e[sr].oldest_regno].mode is QImode and
vd->e[sr].mode is QImode too, and the second partial_subreg_p is false.

But as the testcase shows, which regno is the oldest_regno in a chain is
something that changes over time, so relying on it for anything is
problematic: a register can have one oldest_regno and later get a
different oldest_regno (perhaps with a different mode) because the old
one got overwritten, and it can change both ways.

I wrote the following patch (originally against the 10 branch, because
that is where Uros has been debugging it) and bootstrapped/regtested it
successfully on the 11 branch.  It effectively implements your (2) above;
I'm not sure if REG_CAN_CHANGE_MODE_P is needed there, because it is
already tested in find_oldest_value_reg -> maybe_mode_change ->
mode_change_ok.  So perhaps just the vd->e[dr].mode in there could change
to GET_MODE (src) and we could drop the previous PR98694 change?  If yes,
what to do with the previously added comment?
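To spell out why that last (set (reg:DI dx) (reg:DI bp)) must not be
deleted, here is a small self-contained C sketch of the situation
(illustration only, not part of the patch or of the testcase; the
constants are arbitrary and the "junk" value merely stands in for
whatever happens to sit in the undefined upper bits of dx):

/* Model of the insn sequence discussed above, with arbitrary values.  */
#include <stdio.h>

int
main (void)
{
  unsigned long long r14 = 0x1122334455667788ULL; /* (set (reg:DI r14) (mem:DI ...)) */
  unsigned char si = (unsigned char) r14;         /* (set (reg:QI si) (reg:QI r14))  */
  unsigned long long bp = r14;                    /* (set (reg:DI bp) (reg:DI r14))  */
  unsigned long long junk = 0xdeadbeef00000000ULL;
  unsigned long long dx = junk | si;              /* (set (reg:DI dx) (reg:DI si)),
                                                     upper 56 bits undefined  */

  printf ("low bytes equal: %d\n", (unsigned char) dx == (unsigned char) bp); /* prints 1 */
  printf ("full regs equal: %d\n", dx == bp);                                 /* prints 0 */
  return 0;
}

So dropping the final DImode copy would only be valid if dx were known to
hold the value in DImode, which is exactly what the recorded mode should
not claim here.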
2021-05-05  Jakub Jelinek

	PR rtl-optimization/100342
	* regcprop.c (copy_value): When copying a source reg in a wider
	mode than it has recorded for the value, adjust recorded destination
	mode too.

	* gcc.target/i386/pr100342.c: New test.
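As a rough, self-contained model of what the change does to the
bookkeeping (illustration only: plain ints stand in for hard registers
and mode widths, and set_value/record_copy/redundant_p are invented names,
not the regcprop.c data structures or API):

/* Toy model: "group" stands in for a value chain, "width" for how many
   low bits of a register are known to hold that chain's value.  */
#include <stdio.h>

enum { R14, SI, BP, DX, NREGS };

struct entry { int group, width; };
static struct entry reg[NREGS];
static int next_group;

/* R receives a brand new value.  */
static void
set_value (int r, int width)
{
  reg[r].group = next_group++;
  reg[r].width = width;
}

/* Record "DST = SRC" copied in COPY_WIDTH bits.  With FIXED, never record
   more bits for DST than are known for SRC (the effect of the patch);
   without it, record the full copy width, which loses the QImode
   restriction at the step marked [*] above.  */
static void
record_copy (int dst, int src, int copy_width, int fixed)
{
  int known = reg[src].width < copy_width ? reg[src].width : copy_width;
  reg[dst].group = reg[src].group;
  reg[dst].width = fixed ? known : copy_width;
}

/* Would "DST = SRC" in WIDTH bits be considered redundant?  */
static int
redundant_p (int dst, int src, int width)
{
  return (reg[dst].group == reg[src].group
          && reg[dst].width >= width
          && reg[src].width >= width);
}

static int
run (int fixed)
{
  next_group = 0;
  set_value (R14, 64);              /* (set (reg:DI r14) (mem:DI ...))    */
  record_copy (SI, R14, 8, fixed);  /* (set (reg:QI si) (reg:QI r14))     */
  record_copy (BP, R14, 64, fixed); /* (set (reg:DI bp) (reg:DI r14))     */
  set_value (R14, 64);              /* (set (reg:DI r14) (const_int ...)) */
  record_copy (DX, SI, 64, fixed);  /* (set (reg:DI dx) (reg:DI si))      */
  set_value (SI, 64);               /* (set (reg:DI si) (const_int ...))  */
  return redundant_p (DX, BP, 64);  /* drop (set (reg:DI dx) (reg:DI bp))? */
}

int
main (void)
{
  printf ("without the fix, last copy deleted: %d\n", run (0)); /* prints 1 */
  printf ("with the fix, last copy deleted:    %d\n", run (1)); /* prints 0 */
  return 0;
}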
--- gcc/regcprop.c.jj	2020-04-30 17:41:37.624675304 +0200
+++ gcc/regcprop.c	2021-05-05 16:24:01.667308941 +0200
@@ -358,6 +358,22 @@ copy_value (rtx dest, rtx src, struct va
   else if (sn > hard_regno_nregs (sr, vd->e[sr].mode))
     return;
 
+  /* If a narrower value is copied using wider mode, the upper bits
+     are undefined (could be e.g. a former paradoxical subreg).  Signal
+     in that case we've only copied value using the narrower mode.
+     Consider:
+     (set (reg:DI r14) (mem:DI ...))
+     (set (reg:QI si) (reg:QI r14))
+     (set (reg:DI bp) (reg:DI r14))
+     (set (reg:DI r14) (const_int ...))
+     (set (reg:DI dx) (reg:DI si))
+     (set (reg:DI si) (const_int ...))
+     (set (reg:DI dx) (reg:DI bp))
+     The last set is not redundant, while the low 8 bits of dx are already
+     equal to low 8 bits of bp, the other bits are undefined.  */
+  if (partial_subreg_p (vd->e[sr].mode, vd->e[dr].mode))
+    set_value_regno (dr, vd->e[sr].mode, vd);
+
   /* Link DR at the end of the value chain used by SR.  */
   vd->e[dr].oldest_regno = vd->e[sr].oldest_regno;
 
--- gcc/testsuite/gcc.target/i386/pr100342.c.jj	2021-05-05 17:01:29.139356719 +0200
+++ gcc/testsuite/gcc.target/i386/pr100342.c	2021-05-05 17:01:14.287521150 +0200
@@ -0,0 +1,70 @@
+/* PR rtl-optimization/100342 */
+/* { dg-do run { target int128 } } */
+/* { dg-options "-O2 -fno-dse -fno-forward-propagate -Wno-psabi -mno-sse2" } */
+
+#define SHL(x, y) ((x) << ((y) & (sizeof(x) * 8 - 1)))
+#define SHR(x, y) ((x) >> ((y) & (sizeof(x) * 8 - 1)))
+#define ROR(x, y) (SHR(x, y)) | (SHL(x, (sizeof(x) * 8 - (y))))
+#define SHLV(x, y) ((x) << ((y) & (sizeof((x)[0]) * 8 - 1)))
+#define SHLSV(x, y) ((x) << ((y) & (sizeof((y)[0]) * 8 - 1)))
+typedef unsigned char A;
+typedef unsigned char __attribute__((__vector_size__ (8))) B;
+typedef unsigned char __attribute__((__vector_size__ (16))) C;
+typedef unsigned char __attribute__((__vector_size__ (32))) D;
+typedef unsigned char __attribute__((__vector_size__ (64))) E;
+typedef unsigned short F;
+typedef unsigned short __attribute__((__vector_size__ (16))) G;
+typedef unsigned int H;
+typedef unsigned int __attribute__((__vector_size__ (32))) I;
+typedef unsigned long long J;
+typedef unsigned long long __attribute__((__vector_size__ (8))) K;
+typedef unsigned long long __attribute__((__vector_size__ (32))) L;
+typedef unsigned long long __attribute__((__vector_size__ (64))) M;
+typedef unsigned __int128 N;
+typedef unsigned __int128 __attribute__((__vector_size__ (16))) O;
+typedef unsigned __int128 __attribute__((__vector_size__ (32))) P;
+typedef unsigned __int128 __attribute__((__vector_size__ (64))) Q;
+B v1;
+D v2;
+L v3;
+K v4;
+I v5;
+O v6;
+
+B
+foo (A a, C b, E c, F d, G e, H f, J g, M h, N i, P j, Q k)
+{
+  b &= (A) f;
+  k += a;
+  G l = e;
+  D m = v2 >= (A) (J) v1;
+  J r = a + g;
+  L n = v3 <= f;
+  k -= i / f;
+  l -= (A) g;
+  c |= (A) d;
+  b -= (A) i;
+  J o = ROR (__builtin_clz (r), a);
+  K p = v4 | f, q = v4 <= f;
+  P s = SHLV (SHLSV (__builtin_bswap64 (i), (P) (0 < j)) <= 0, j);
+  n += a <= r;
+  M t = (M) (a / SHLV (c, 0)) != __builtin_bswap64 (i);
+  I u = f - v5;
+  E v = (E) h + (E) t + (E) k;
+  D w = (union { D b[2]; }) { }.b[0] + ((union { E b; }) v).b[1] + m + (D) u + (D) n + (D) s;
+  C x = ((union { D b; }) w).b[1] + b + (C) l + (C) v6;
+  B y = ((union { C a; B b; }) x).b + ((union { C a; B b[2]; }) x).b[1] + (B) p + (B) q;
+  J z = i + o;
+  F z2 = z;
+  A z3 = z2;
+  return y + z3;
+}
+
+int
+main ()
+{
+  B x = foo (0, (C) { }, (E) { }, 10, (G) { }, 4, 2, (M) { }, 123842323652213865LL, (P) { 1 }, (Q) { });
+  if ((J) x != 0x2e2c2e2c2e2c2e30ULL)
+    __builtin_abort();
+  return 0;
+}

	Jakub