From: Aaron Sawdey
Subject: Re: [PATCH][rs6000] avoid using unaligned vsx or lxvd2x/stxvd2x for memcpy/memmove inline expansion
To: Segher Boessenkool
Cc: GCC Patches, David Edelsohn, Bill Schmidt
Date: Mon, 14 Jan 2019 18:49:00 -0000
Message-Id: <578b94c0-d1d4-46d5-25d5-7077c306c3ea@linux.ibm.com>
In-Reply-To: <20181220234402.GX3803@gate.crashing.org>
References: <0a17416b-57a0-99e7-2e7e-90a63da66fe6@linux.ibm.com> <20181220095119.GP3803@gate.crashing.org> <30fd466c-43c7-86aa-81f2-181a9d9ca7fc@linux.ibm.com> <20181220234402.GX3803@gate.crashing.org>
The patch for this was committed to trunk as r267562 (see below).  Is this
also OK to backport to the GCC 8 branch?

Thanks,
   Aaron

On 12/20/18 5:44 PM, Segher Boessenkool wrote:
> On Thu, Dec 20, 2018 at 05:34:54PM -0600, Aaron Sawdey wrote:
>> On 12/20/18 3:51 AM, Segher Boessenkool wrote:
>>> On Wed, Dec 19, 2018 at 01:53:05PM -0600, Aaron Sawdey wrote:
>>>> Because of POWER9 dd2.1 issues with certain unaligned vsx instructions
>>>> to cache inhibited memory, here is a patch that keeps memmove (and memcpy)
>>>> inline expansion from using unaligned vector instructions or any vector
>>>> load/store other than lvx/stvx.  More description of the issue is here:
>>>>
>>>> https://patchwork.ozlabs.org/patch/814059/
>>>>
>>>> OK for trunk if bootstrap/regtest ok?
>>>
>>> Okay, but see below.
>>>
>> [snip]
>>>
>>> This is extraordinarily clumsy :-)  Maybe something like:
>>>
>>> static rtx
>>> gen_lvx_v4si_move (rtx dest, rtx src)
>>> {
>>>   gcc_assert (!(MEM_P (dest) && MEM_P (src)));
>>>   gcc_assert (GET_MODE (dest) == V4SImode && GET_MODE (src) == V4SImode);
>>>   if (MEM_P (dest))
>>>     return gen_altivec_stvx_v4si_internal (dest, src);
>>>   else if (MEM_P (src))
>>>     return gen_altivec_lvx_v4si_internal (dest, src);
>>>   else
>>>     gcc_unreachable ();
>>> }
>>>
>>> (Or do you allow VOIDmode for src as well?)  Anyway, at least get rid of
>>> the useless extra variable.
>>
>> I think this should be better:
>
> The gcc_unreachable at the end catches the non-mem to non-mem case.
>
>> static rtx
>> gen_lvx_v4si_move (rtx dest, rtx src)
>> {
>>   gcc_assert ((MEM_P (dest) && !MEM_P (src)) || (MEM_P (src) && !MEM_P (dest)));
>
> But if you prefer this, how about
>
> {
>   gcc_assert (MEM_P (dest) ^ MEM_P (src));
>   gcc_assert (GET_MODE (dest) == V4SImode && GET_MODE (src) == V4SImode);
>
>   if (MEM_P (dest))
>     return gen_altivec_stvx_v4si_internal (dest, src);
>   else
>     return gen_altivec_lvx_v4si_internal (dest, src);
> }
>
> :-)
>
>
> Segher
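For anyone following along, a minimal, hypothetical sketch of how the new
helper is consumed: expand_block_move moves each 16-byte chunk through a
temporary V4SI register, so gen_lvx_v4si_move is called once with a MEM
source (producing lvx) and once with a MEM destination (producing stvx).
This is a simplified illustration, not code from the patch; src_mem and
dest_mem stand for MEM rtxes the expander would have built, and the real
code batches the generated insns rather than emitting them back to back:

  /* Illustrative only: move one 16-byte chunk through a temporary
     vector register.  */
  rtx tmp_reg = gen_reg_rtx (V4SImode);
  emit_insn (gen_lvx_v4si_move (tmp_reg, src_mem));   /* reg <- mem: lvx  */
  emit_insn (gen_lvx_v4si_move (dest_mem, tmp_reg));  /* mem <- reg: stvx */

The committed patch follows: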
2019-01-03  Aaron Sawdey  <acsawdey@linux.vnet.ibm.com>

	* config/rs6000/rs6000-string.c (expand_block_move): Don't use
	unaligned vsx and avoid lxvd2x/stxvd2x.
	(gen_lvx_v4si_move): New function.

Index: gcc/config/rs6000/rs6000-string.c
===================================================================
--- gcc/config/rs6000/rs6000-string.c	(revision 267299)
+++ gcc/config/rs6000/rs6000-string.c	(working copy)
@@ -2669,6 +2669,25 @@
   return true;
 }
 
+/* Generate loads and stores for a move of v4si mode using lvx/stvx.
+   This uses altivec_{l,st}vx_<mode>_internal which use unspecs to
+   keep combine from changing what instruction gets used.
+
+   DEST is the destination for the data.
+   SRC is the source of the data for the move.  */
+
+static rtx
+gen_lvx_v4si_move (rtx dest, rtx src)
+{
+  gcc_assert (MEM_P (dest) ^ MEM_P (src));
+  gcc_assert (GET_MODE (dest) == V4SImode && GET_MODE (src) == V4SImode);
+
+  if (MEM_P (dest))
+    return gen_altivec_stvx_v4si_internal (dest, src);
+  else
+    return gen_altivec_lvx_v4si_internal (dest, src);
+}
+
 /* Expand a block move operation, and return 1 if successful.  Return
    0 if we should let the compiler generate normal code.
 
@@ -2721,11 +2740,11 @@
 
   /* Altivec first, since it will be faster than a string move
      when it applies, and usually not significantly larger.  */
-  if (TARGET_ALTIVEC && bytes >= 16 && (TARGET_EFFICIENT_UNALIGNED_VSX || align >= 128))
+  if (TARGET_ALTIVEC && bytes >= 16 && align >= 128)
     {
       move_bytes = 16;
       mode = V4SImode;
-      gen_func.mov = gen_movv4si;
+      gen_func.mov = gen_lvx_v4si_move;
     }
   else if (bytes >= 8 && TARGET_POWERPC64
 	   && (align >= 64 || !STRICT_ALIGNMENT))

-- 
Aaron Sawdey, Ph.D.  acsawdey@linux.vnet.ibm.com
050-2/C113  (507) 253-7520 home: 507/263-0782
IBM Linux Technology Center - PPC Toolchain