Subject: Re: backward threading heuristics tweek
From: Jeff Law
To: Jan Hubicka
Cc: James Greenhalgh, Andrew Pinski, GCC Patches, nd@arm.com
Date: Tue, 16 Aug 2016 15:43:00 -0000
In-Reply-To: <20160815200609.GA26405@kam.mff.cuni.cz>

On 08/15/2016 02:06 PM, Jan Hubicka wrote:
>>> So the threaded path lives fully inside loop1: 6->8->9->3->4->6, propagating
>>> that phi_inserted is 0 after the first iteration of the loop.  This looks like
>>> a useful loop peeling opportunity which does not garble the loop structure.  So
>>> perhaps threading paths that start at and pass through the loop latch (i.e.
>>> peeling) is sane?  Perhaps all paths fully captured within the loop in
>>> question are?
>> Peeling like this has long been a point of contention -- it totally
>> mucks up things like vectorization.
>>
>> The general issue is that the threader knows nothing about the
>> characteristics of the loop -- thus peeling at this point is premature
>> and just as likely to hinder performance as to improve it.
>>
>> I've never been happy with how this aspect of threading vs. loop opts
>> turned out, and we have open BZs related to this rat's nest of issues.
>
> OK, then perhaps we just want to silence the testcase?
We might.  I'll have to take a closer look, though.  Which means I have
to stop losing time to other things every day ;(

jeff
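
[For context, a minimal sketch of the loop shape under discussion, assuming the
testcase follows the usual pattern: a flag that is zero only before the first
iteration, so threading the path through the latch effectively peels that first
iteration.  The name phi_inserted is taken from the dump quoted above; the
function and other names are illustrative only, not from the actual testcase.]

    /* Hypothetical sketch: the "do this only once" flag is false solely on
       the first trip through the loop.  Threading on !phi_inserted through
       the latch duplicates the first iteration (peeling), which can make
       the remaining loop harder for the vectorizer to handle.  */
    static int sink;

    static void
    consume (int v)
    {
      sink += v;
    }

    void
    example (int *a, int n)
    {
      int phi_inserted = 0;
      for (int i = 0; i < n; i++)
        {
          if (!phi_inserted)
            {
              consume (a[0]);      /* work done only on the first iteration */
              phi_inserted = 1;
            }
          consume (a[i]);          /* work done on every iteration */
        }
    }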