From: Richard Sandiford
To: Tamar Christina
Cc: gcc-patches@gcc.gnu.org, nd@arm.com, Richard.Earnshaw@arm.com, Marcus.Shawcroft@arm.com, Kyrylo.Tkachov@arm.com
Subject: Re: [PATCH][RFC]AArch64 SVE: Fix multiple comparison masks on inverted operands
Date: Mon, 14 Jun 2021 15:50:09 +0100
In-Reply-To: (Tamar Christina's message of "Mon, 14 Jun 2021 14:43:01 +0100")

Tamar Christina writes:
> Hi All,
>
> This RFC is trying to address the following inefficiency when vectorizing
> conditional statements with SVE.
>
> Consider the case
>
> void f10(double * restrict z, double * restrict w, double * restrict x,
>          double * restrict y, int n)
> {
>     for (int i = 0; i < n; i++) {
>         z[i] = (w[i] > 0) ? x[i] + w[i] : y[i] - w[i];
>     }
> }
>
> For which we currently generate at -O3:
>
> f10:
>         cmp     w4, 0
>         ble     .L1
>         mov     x5, 0
>         whilelo p1.d, wzr, w4
>         ptrue   p3.b, all
> .L3:
>         ld1d    z1.d, p1/z, [x1, x5, lsl 3]
>         fcmgt   p2.d, p1/z, z1.d, #0.0
>         fcmgt   p0.d, p3/z, z1.d, #0.0
>         ld1d    z2.d, p2/z, [x2, x5, lsl 3]
>         bic     p0.b, p3/z, p1.b, p0.b
>         ld1d    z0.d, p0/z, [x3, x5, lsl 3]
>         fsub    z0.d, p0/m, z0.d, z1.d
>         movprfx z0.d, p2/m, z1.d
>         fadd    z0.d, p2/m, z0.d, z2.d
>         st1d    z0.d, p1, [x0, x5, lsl 3]
>         incd    x5
>         whilelo p1.d, w5, w4
>         b.any   .L3
> .L1:
>         ret
>
> Notice that the condition for the else branch duplicates the same predicate
> as the then branch and then uses BIC to negate the result.
>
> The reason for this is that during instruction generation in the vectorizer
> we emit
>
>   mask__41.11_66 = vect__4.10_64 > vect_cst__65;
>   vec_mask_and_69 = mask__41.11_66 & loop_mask_63;
>   vec_mask_and_71 = mask__41.11_66 & loop_mask_63;
>   mask__43.16_73 = ~mask__41.11_66;
>   vec_mask_and_76 = mask__43.16_73 & loop_mask_63;
>   vec_mask_and_78 = mask__43.16_73 & loop_mask_63;
>
> which ultimately gets optimized to
>
>   mask__41.11_66 = vect__4.10_64 > { 0.0, ... };
>   vec_mask_and_69 = loop_mask_63 & mask__41.11_66;
>   mask__43.16_73 = ~mask__41.11_66;
>   vec_mask_and_76 = loop_mask_63 & mask__43.16_73;
>
> Notice how the negate is on the operation and not the predicate resulting
> from the operation.  When this is expanded it turns into RTL where the
> negate is on the compare directly.  This means the RTL differs from the
> version without the negate, and so CSE is unable to recognize that they are
> essentially the same operation.
>
> To fix this my patch changes it so that you negate the mask rather than
> the operation:
>
>   mask__41.13_55 = vect__4.12_53 > { 0.0, ...
>   };
>   vec_mask_and_58 = loop_mask_52 & mask__41.13_55;
>   vec_mask_op_67 = ~vec_mask_and_58;
>   vec_mask_and_65 = loop_mask_52 & vec_mask_op_67;

But to me this looks like a pessimisation in gimple terms.  We've increased
the length of the critical path: vec_mask_and_65 now needs a chain of
4 operations instead of 3.

We also need to be careful not to pessimise the case in which the comparison
is an integer one.  At the moment we'll generate opposed conditions, which
is the intended behaviour:

.L3:
        ld1d    z1.d, p0/z, [x1, x5, lsl 3]
        cmpgt   p2.d, p0/z, z1.d, #0
        movprfx z2, z1
        scvtf   z2.d, p3/m, z1.d
        cmple   p1.d, p0/z, z1.d, #0
        ld1d    z0.d, p2/z, [x2, x5, lsl 3]
        ld1d    z1.d, p1/z, [x3, x5, lsl 3]
        fadd    z0.d, p2/m, z0.d, z2.d
        movprfx z0.d, p1/m, z1.d
        fsub    z0.d, p1/m, z0.d, z2.d
        st1d    z0.d, p0, [x0, x5, lsl 3]
        add     x5, x5, x6
        whilelo p0.d, w5, w4
        b.any   .L3

Could we instead handle the fcmp case using a 3->2 define_split that converts:

  (set res (and (not (fcmp X Y)) Z))

into:

  (set res (fcmp X Y))
  (set res (and (not res) Z))

?

Thanks,
Richard