From: Jeff Law
Date: Wed, 31 Aug 2022 08:38:19 -0600
Subject: Re: [PATCH 6/6] Extend SLP permutation optimisations
To: Jeff Law via Gcc-patches, richard.sandiford@arm.com

On 8/30/2022 8:50 AM, Richard Sandiford wrote:
> Jeff Law via Gcc-patches writes:
>> On 8/25/2022 7:07 AM, Richard Sandiford via Gcc-patches wrote:
>>> Currently SLP tries to force permute operations "down" the graph
>>> from loads in the hope of reducing the total number of permutations
>>> needed or (in the best case) removing the need for the permutations
>>> entirely.  This patch tries to extend it as follows:
>>>
>>> - Allow loads to take a different permutation from the one they
>>>   started with, rather than choosing between "original permutation"
>>>   and "no permutation".
>>>
>>> - Allow changes in both directions, if the target supports the
>>>   reverse permutation.
>>>
>>> - Treat the placement of permutations as a two-way dataflow problem:
>>>   after propagating information from leaves to roots (as now),
>>>   propagate information back up the graph.
>>>
>>> - Take execution frequency into account when optimising for speed,
>>>   so that (for example) permutations inside loops have a higher
>>>   cost than permutations outside loops.
>>>
>>> - Try to reduce the total number of permutations when optimising for
>>>   size, even if that increases the number of permutations on a given
>>>   execution path.
>>>
>>> See the big block comment above vect_optimize_slp_pass for
>>> a detailed description.
>>>
>>> The original motivation for doing this was to add a framework that
>>> would allow other layout differences in future.  The two main ones are:
>>>
>>> - Make it easier to represent predicated operations, including
>>>   predicated operations with gaps.  E.g.:
>>>
>>>     a[0] += 1;
>>>     a[1] += 1;
>>>     a[3] += 1;
>>>
>>>   could be a single load/add/store for SVE.  We could handle this
>>>   by representing a layout such as { 0, 1, _, 2 } or { 0, 1, _, 3 }
>>>   (depending on what's being counted).  We might need to move
>>>   elements between lanes at various points, like with permutes.
>>>
>>>   (This would first mean adding support for stores with gaps.)
>>>
>>> - Make it easier to switch between an even/odd and unpermuted layout
>>>   when switching between wide and narrow elements.  E.g. if a widening
>>>   operation produces an even vector and an odd vector, we should try
>>>   to keep operations on the wide elements in that order rather than
>>>   force them to be permuted back "in order".
>> Very cool.
>> Richi and I discussed this a bit a year or so ago -- basically noting
>> that bi-directional movement is really the way to go and that the worst
>> thing to do is push a permute down into the *middle* of a computation
>> chain since that will tend to break FMA generation.  Moving to the
>> loads or stores or to another permute point ought to be the goal.
> Hmm, I hadn't thought specifically about the case of permutes
> ending up in the middle of a fusable operation.  The series doesn't
> address that directly.  If we have:
>
>   permute(a) * permute(b) + c
>
> then the tendency will still be to convert that into:
>
>   permute(a * b) + c
>
> Damn.  Another case to think about ;-)
>
> I've pushed the series for now though (thanks to Richi for the reviews).

There's a simple testcase attached to PR101895 which shows an example
where (in the gcc-11 era) we pushed a permute down in a problematic way.
It might be worth taking a looksie, though I think we're avoiding the
problem behavior via a workaround on the trunk right now.

Jeff
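
A minimal, made-up C shape for the permute(a) * permute(b) + c concern
discussed above (this is not the PR101895 testcase, which is not quoted
here): the a and b operands are consumed in swapped lane order, so if the
layout pass sinks that load permute past the multiply, the permute lands
between the mult and the add, where it can get in the way of FMA formation.

  void
  f (double *restrict out, const double *restrict a,
     const double *restrict b, const double *restrict c, int n)
  {
    for (int i = 0; i < n; i += 2)
      {
        /* a and b are read in reversed lane order (a permute on the
           loads), while c and out stay in order.  */
        out[i]     = a[i + 1] * b[i + 1] + c[i];
        out[i + 1] = a[i]     * b[i]     + c[i + 1];
      }
  }

With something like -O3 -ffp-contract=fast on an FMA-capable target, the
multiply and add would normally fuse; a permute materialised between them
prevents that.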
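
And a rough sketch of the "single load/add/store for SVE" idea quoted
above, assuming the ACLE intrinsics from arm_sve.h; the function name is
made up, and for simplicity it assumes a 128-bit vector so the quadword
predicate pattern covers exactly lanes 0-3.

  #include <stdint.h>
  #include <arm_sve.h>

  /* Hypothetical illustration: a[0] += 1; a[1] += 1; a[3] += 1; done as
     one predicated load/add/store, with the predicate leaving a gap in
     lane 2.  Assumes a 128-bit vector length; on longer vectors
     svdupq_n_b32 repeats the four-lane pattern, so a real version would
     also need to limit the governing predicate to the first four lanes.  */
  void
  bump_gapped (int32_t *a)
  {
    svbool_t pg = svdupq_n_b32 (1, 1, 0, 1);  /* lanes 0, 1, 3 active */
    svint32_t v = svld1_s32 (pg, a);          /* load; lane 2 inactive */
    v = svadd_n_s32_m (pg, v, 1);             /* += 1 on active lanes only */
    svst1_s32 (pg, a, v);                     /* store back only those lanes */
  }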