From: Richard Sandiford
To: Richard Biener via Gcc-patches
Cc: Andrew Stubbs, Richard Biener, juzhe.zhong@rivai.ai
Subject: Re: [PATCH] vect: while_ult for integer mask
Date: Thu, 29 Sep 2022 10:24:47 +0100

Richard Biener via Gcc-patches writes:
> On Wed, Sep 28, 2022 at 5:06 PM Andrew Stubbs wrote:
>>
>> This patch is a prerequisite for some amdgcn patches I'm working on to
>> support shorter vector lengths (having fixed 64 lanes tends to miss
>> optimizations, and masking is not supported everywhere yet).
>>
>> The problem is that, unlike AArch64, I'm not using different mask modes
>> for different sized vectors, so all loops end up using the while_ultsidi
>> pattern, regardless of vector length.  In theory I could use SImode for
>> V32, HImode for V16, etc., but there's no mode to fit V4 or V2 so
>> something else is needed.  Moving to using vector masks in the backend
>> is not a natural fit for GCN, and would be a huge task in any case.
>>
>> This patch adds an additional length operand so that we can distinguish
>> the different uses in the back end and don't end up with more lanes
>> enabled than there ought to be.
>>
>> I've made the extra operand conditional on the mode so that I don't have
>> to modify the AArch64 backend; that uses the while_ family of
>> operators in a lot of places and uses iterators, so it would end up
>> touching a lot of code just to add an inactive operand, plus I don't
>> have a way to test it properly.  I've confirmed that AArch64 builds and
>> expands while_ult correctly in a simple example.
>>
>> OK for mainline?
>
> Hmm, but you could introduce BI4mode and BI2mode for V4 and V2, no?
> Not sure if it is possible to have two partial integer modes and use those.

Might be difficult to do cleanly, since BI is very much a special case.
But I agree that that would better fit the existing scheme.

Otherwise:

  operand0[0] = operand1 < operand2;
  for (i = 1; i < operand3; i++)
    operand0[i] = operand0[i - 1] && (operand1 + i < operand2);

looks like a "length and mask" operation, which IIUC is also what RVV
wanted?  (Wasn't at the Cauldron, so not entirely sure.)

Perhaps the difference is that in this case the length must be constant.
(Or is that true for RVV as well?)

Thanks,
Richard
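
[Editor's sketch, for concreteness only: a minimal C rendering of the "length
and mask" semantics given by the pseudocode above, with the result held as an
integer bitmask as in Andrew's integer-mask scheme.  The helper name
while_ult_mask, the uint64_t mask representation, and the example values are
illustrative assumptions, not part of the patch or of GCC's internals.]

  #include <stdint.h>

  /* Build an integer mask whose bit I is set when BASE + I < LIMIT,
     considering only the first LEN lanes.  This follows the recurrence
     operand0[i] = operand0[i - 1] && (operand1 + i < operand2): once one
     lane fails the comparison, every later lane stays clear, and lanes at
     or beyond LEN are never enabled.  */
  static uint64_t
  while_ult_mask (uint64_t base, uint64_t limit, unsigned len)
  {
    uint64_t mask = 0;
    for (unsigned i = 0; i < len; i++)
      {
        if (!(base + i < limit))
          break;                      /* all later lanes remain disabled */
        mask |= (uint64_t) 1 << i;
      }
    return mask;
  }

For example, while_ult_mask (6, 8, 4) yields 0x3: only the first two of four
lanes are enabled, and the extra length operand is what keeps lanes beyond the
vector's actual length (here lanes 4..63) from ever being switched on.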