From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [RFC][PATCH] Merge VEC_COND_EXPR into MASK_STORE after loop vectorization
To: Richard Biener
Cc: GCC Patches, Richard Sandiford, Ramana Radhakrishnan, James Greenhalgh
From: Renlin Li
Date: Thu, 08 Nov 2018 16:56:00 -0000

Hi Richard,

On 11/08/2018 12:09 PM, Richard Biener wrote:
> On Thu, Nov 8, 2018 at 12:02 PM Renlin Li wrote:
>>
>> Hi all,
>>
>> When allow-store-data-races is enabled, ifcvt would prefer to generate a
>> conditional select and an unconditional store to convert certain if
>> statements into:
>>
>>    _ifc_1 = val
>>    _ifc_2 = A[i]
>>    val = cond ? _ifc_1 : _ifc_2
>>    A[i] = val
>>
>> When the loop gets vectorized, this pattern is turned into MASK_LOAD,
>> VEC_COND_EXPR and MASK_STORE.  This could be improved.
>
> I'm somewhat confused - the vectorizer doesn't generate a masked store when
> if-conversion didn't create one in the first place.
>
> In particular with allow-store-data-races=1 (what your testcase uses)
> there are no masked loads/stores generated at all.  So at least you need a
> better testcase to motivate (one that doesn't load from array[i] so that we
> know the conditional stores might trap).

Thanks for trying this.  The test case is a little bit simple and artificial.
With allow-store-data-races=1, ifcvt won't generate a mask_store; instead it
generates an unconditional store.

My build was based on trunk from 25th Oct.  I got the following IR from ifcvt
with:

  aarch64-none-elf-gcc -S -march=armv8-a+sve -O2 -ftree-vectorize
    --param allow-store-data-races=1

  [local count: 1006632961]:
  # i_20 = PHI
  # ivtmp_18 = PHI
  a_10 = array[i_20];
  _1 = a_10 & 1;
  _2 = a_10 + 1;
  _ifc__32 = array[i_20];
  _ifc__33 = _2;
  _ifc__34 = _1 != 0 ? _ifc__33 : _ifc__32;
  array[i_20] = _ifc__34;
  prephitmp_8 = _1 != 0 ? _2 : a_10;
  _4 = a_10 + 2;
  _ifc__35 = array[i_20];
  _ifc__36 = _4;
  _ifc__37 = prephitmp_8 > 10 ? _ifc__36 : _ifc__35;
  array[i_20] = _ifc__37;
  i_13 = i_20 + 1;
  ivtmp_5 = ivtmp_18 - 1;
  if (ivtmp_5 != 0)
    goto ; [93.33%]
  else
    goto ; [6.67%]
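For reference, a loop of roughly the following shape reproduces the IR above
when compiled with the command line quoted earlier (this is only a simplified
sketch of the testcase, not the exact one added by the patch):

  #define N 1024
  int array[N];

  void
  foo (void)
  {
    for (int i = 0; i < N; i++)
      {
        int a = array[i];
        if (a & 1)
          array[i] = a + 1;   /* first conditional store */
        if (array[i] > 10)
          array[i] = a + 2;   /* second conditional store */
      }
  }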
*However*, after I rebased my patch on the latest trunk, I got the following
dump from ifcvt:

  [local count: 1006632961]:
  # i_20 = PHI
  # ivtmp_18 = PHI
  a_10 = array[i_20];
  _1 = a_10 & 1;
  _2 = a_10 + 1;
  _ifc__34 = _1 != 0 ? _2 : a_10;
  array[i_20] = _ifc__34;
  _4 = a_10 + 2;
  _ifc__37 = _ifc__34 > 10 ? _4 : _ifc__34;
  array[i_20] = _ifc__37;
  i_13 = i_20 + 1;
  ivtmp_5 = ivtmp_18 - 1;
  if (ivtmp_5 != 0)
    goto ; [93.33%]
  else
    goto ; [6.67%]

The redundant load is no longer generated, but you can still see the
unconditional store.

After loop vectorization, the following is generated (without my change):

  vect_a_10.6_6 = .MASK_LOAD (vectp_array.4_35, 4B, loop_mask_7);
  a_10 = array[i_20];
  vect__1.7_39 = vect_a_10.6_6 & vect_cst__38;
  _1 = a_10 & 1;
  vect__2.8_41 = vect_a_10.6_6 + vect_cst__40;
  _2 = a_10 + 1;
  vect__ifc__34.9_43 = VEC_COND_EXPR ;
  _ifc__34 = _1 != 0 ? _2 : a_10;
  .MASK_STORE (vectp_array.10_45, 4B, loop_mask_7, vect__ifc__34.9_43);
  vect__4.12_49 = vect_a_10.6_6 + vect_cst__48;
  _4 = a_10 + 2;
  vect__ifc__37.13_51 = VEC_COND_EXPR vect_cst__50, vect__4.12_49, vect__ifc__34.9_43>;
  _ifc__37 = _ifc__34 > 10 ? _4 : _ifc__34;
  .MASK_STORE (vectp_array.14_53, 4B, loop_mask_7, vect__ifc__37.13_51);

With the old ifcvt code, my change could improve this a little by eliminating
some redundant loads.  With the new code, it cannot improve it any further.
I'll adjust the patch based on the latest trunk.

> So what I see (with store data races not allowed) from ifcvt is

When store data races are not allowed, ifcvt won't generate an unconditional
store; instead it generates a predicated store.  That's what you showed here.

As I mentioned, we could always make ifcvt generate a mask_store, as that
should always be safe.  But I don't know the performance implications on other
targets (I assume there must be reasons why ifcvt was written to generate an
unconditional store when data races are allowed?  My understanding is that
this option allows the compiler to be more aggressive in its optimization).

The other reason is the data reference analysis: a versioned loop might be
created for a more complex test case.  Again, I need to rebase and check my
patch against the latest trunk, and come up with a better test case.

>
>   [local count: 1006632961]:
>   # i_20 = PHI
>   # ivtmp_18 = PHI
>   a_10 = array[i_20];
>   _1 = a_10 & 1;
>   _2 = a_10 + 1;
>   _32 = _1 != 0;
>   _33 = &array[i_20];
>   .MASK_STORE (_33, 32B, _32, _2);
>   prephitmp_8 = _1 != 0 ? _2 : a_10;
>   _4 = a_10 + 2;
>   _34 = prephitmp_8 > 10;
>   .MASK_STORE (_33, 32B, _34, _4);
>   i_13 = i_20 + 1;
>   ivtmp_5 = ivtmp_18 - 1;
>   if (ivtmp_5 != 0)
>
> and what you want to do is merge
>
>   prephitmp_8 = _1 != 0 ? _2 : a_10;
>   _34 = prephitmp_8 > 10;
>
> somehow?  But your patch mentions that _4 should be prephitmp_8 so
> it wouldn't do anything here?
>
>> The change here adds a post-processing function to combine the VEC_COND_EXPR
>> expression into the MASK_STORE, and delete the related dead code.
>>
>> I am a little bit conservative here.
>> I didn't change the default behavior of ifcvt to always generate MASK_STORE,
>> although it should be safe in all cases (whether store data races are
>> allowed or not).
>>
>> MASK_STORE might not be as well handled in the vectorization pass as a
>> conditional select.  It might be too early and aggressive to do that in
>> ifcvt.  And the performance of MASK_STORE might not be good on some
>> platforms.  (We could add a --param or target hook to differentiate this
>> ifcvt behavior on different platforms.)
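To make that concrete, the post-processing is roughly of the following shape.
This is only an illustrative sketch of the idea, much simplified compared with
what the attached patch actually does: in particular the real matching has to
recognise the VEC_COND_EXPR condition combined with the loop mask, and it also
has to clean up the then-dead VEC_COND_EXPR and MASK_LOAD.

  /* Sketch only, not the code from the patch.  Assumes the usual GCC
     internal headers (tree.h, gimple.h, gimple-iterator.h, cfgloop.h,
     ssa.h, ...).  */

  static void
  combine_sel_mask_store_sketch (struct loop *loop)
  {
    basic_block *bbs = get_loop_body (loop);
    for (unsigned i = 0; i < loop->num_nodes; i++)
      for (gimple_stmt_iterator gsi = gsi_start_bb (bbs[i]);
           !gsi_end_p (gsi); gsi_next (&gsi))
        {
          gimple *stmt = gsi_stmt (gsi);
          if (!gimple_call_internal_p (stmt, IFN_MASK_STORE))
            continue;

          /* .MASK_STORE (ptr, align, mask, value)  */
          tree mask = gimple_call_arg (stmt, 2);
          tree value = gimple_call_arg (stmt, 3);
          if (TREE_CODE (value) != SSA_NAME)
            continue;

          gimple *def = SSA_NAME_DEF_STMT (value);
          if (!is_gimple_assign (def)
              || gimple_assign_rhs_code (def) != VEC_COND_EXPR)
            continue;

          /* Only the trivial case: the VEC_COND_EXPR condition is exactly
             the store mask, so the "else" value is what the masked-out
             lanes already contain and the select is redundant.  */
          if (operand_equal_p (gimple_assign_rhs1 (def), mask, 0))
            {
              gimple_call_set_arg (stmt, 3, gimple_assign_rhs2 (def));
              update_stmt (stmt);
            }
        }
    free (bbs);
  }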
>> Another reason I did not do that in ifcvt is the data reference analysis.
>> To create a MASK_STORE, a pointer is created as the first argument to the
>> internal function call.  If the pointer is created out of array references,
>> e.g. x = &A[i], data reference analysis cannot properly analyse the
>> relationship between MEM_REF (x) and ARRAY_REF (A, i).  This will create a
>> versioned loop beside the vectorized one.
>
> Actually for your testcase it won't vectorize because there's compile-time
> aliasing (somehow we miss handling of dependence distance zero?!)
>
>> (I have hacks to look through the MEM_REF and restore the reference back to
>> ARRAY_REF (A, i).  Maybe we could do the analysis on the lowered address
>> expression?  I saw we have the gimple laddress pass to aid the vectorizer.)
>
> An old idea of mine is to have dependence analysis fall back to lowered address
> form when it fails to match two references.  This would require re-analysis and
> eventually storing an alternate "inner reference" in the data-ref structure.

Yes, that makes sense.
My hack is in get_references_in_stmt: look through the pointer to see where it
is generated (roughly as in the sketch at the end of this mail).  This is only
done for mask_store, as we know its first operand is originally a pointer, or a
new pointer derived from something.

Thanks,
Renlin

>
>> The approach here comes a little bit late, on the condition that a vector
>> MASK_STORE has already been generated by the loop vectorizer.  In this case,
>> it is definitely beneficial to do the code transformation.
>>
>> Any thoughts on the best way to fix the issue?
>>
>> This patch has been tested with aarch64-none-elf, no regressions.
>>
>> Regards,
>> Renlin
>>
>> gcc/ChangeLog:
>>
>> 2018-11-08  Renlin Li
>>
>>         * tree-vectorizer.h (combine_sel_mask_store): Declare new function.
>>         * tree-vect-loop.c (combine_sel_mask_store): Define new function.
>>         * tree-vectorizer.c (vectorize_loops): Call it.
>>
>> gcc/testsuite/ChangeLog:
>>
>> 2018-11-08  Renlin Li
>>
>>         * gcc.target/aarch64/sve/combine_vcond_mask_store_1.c: New.
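For illustration, the look-through in get_references_in_stmt that I mentioned
above is roughly of the following shape (a sketch of the idea only, not the
actual hack; the helper name is made up):

  /* Sketch: given a .MASK_STORE call, return the underlying ARRAY_REF when
     its address operand is defined as ptr = &A[i], otherwise a MEM_REF of
     the pointer, so that dependence analysis can relate the store to other
     accesses to A.  */

  static tree
  mask_store_ref_sketch (gimple *call)
  {
    tree ptr = gimple_call_arg (call, 0);
    if (TREE_CODE (ptr) == SSA_NAME)
      {
        gimple *def = SSA_NAME_DEF_STMT (ptr);
        if (is_gimple_assign (def)
            && gimple_assign_rhs_code (def) == ADDR_EXPR)
          {
            tree base = TREE_OPERAND (gimple_assign_rhs1 (def), 0);
            if (TREE_CODE (base) == ARRAY_REF)
              return base;                 /* ARRAY_REF (A, i)  */
          }
      }
    return build_simple_mem_ref (ptr);     /* fall back to MEM_REF (ptr)  */
  }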