From: "Bin.Cheng"
Date: Fri, 25 May 2018 09:49:00 -0000
Subject: Re: PR80155: Code hoisting and register pressure
To: Prathamesh Kulkarni
Cc: Jeff Law, Richard Biener, GCC Development, Thomas Preudhomme

On Fri, May 25, 2018 at 10:23 AM, Prathamesh Kulkarni wrote:
> On 23 May 2018 at 18:37, Jeff Law wrote:
>> On 05/23/2018 03:20 AM, Prathamesh Kulkarni wrote:
>>> On 23 May 2018 at 13:58, Richard Biener wrote:
>>>> On Wed, 23 May 2018, Prathamesh Kulkarni wrote:
>>>>
>>>>> Hi,
>>>>> I am trying to work on PR80155, which exposes a problem with code
>>>>> hoisting and register pressure on a leading embedded benchmark for ARM
>>>>> Cortex-M7, where code hoisting causes an extra register spill.
>>>>>
>>>>> I have attached two test-cases which (hopefully) are representative of
>>>>> the original test-case.  The first one (trans_dfa.c) is bigger and
>>>>> somewhat similar to the original test-case, and trans_dfa_2.c is a
>>>>> hand-reduced version of trans_dfa.c.  There are two spills with
>>>>> trans_dfa.c and one spill with trans_dfa_2.c, which has fewer cases.
>>>>> The test-cases in the PR are probably not relevant.
>>>>>
>>>>> Initially I thought the spill was happening because of "too many
>>>>> hoistings" taking place in the original test-case, thus increasing
>>>>> register pressure, but it seems the spill is possibly caused because
>>>>> an expression gets hoisted out of a block that is on a loop exit.
>>>>>
>>>>> For example, the following hoistings take place with trans_dfa_2.c:
>>>>>
>>>>> (1) Inserting expression in block 4 for code hoisting:
>>>>> {mem_ref<0B>,tab_20(D)}@.MEM_45 (0005)
>>>>>
>>>>> (2) Inserting expression in block 4 for code hoisting: {plus_expr,_4,1} (0006)
>>>>>
>>>>> (3) Inserting expression in block 4 for code hoisting:
>>>>> {pointer_plus_expr,s_33,1} (0023)
>>>>>
>>>>> (4) Inserting expression in block 3 for code hoisting:
>>>>> {pointer_plus_expr,s_33,1} (0023)
>>>>>
>>>>> The issue seems to be the hoisting of (*tab + 1), i.e. the first two
>>>>> hoistings into block 4 from blocks 5 and 9, which causes the extra
>>>>> spill.  I verified that by disabling hoisting into block 4, which
>>>>> resulted in no extra spills.
>>>>>
>>>>> I wonder if that's because the expression (*tab + 1) is getting
>>>>> hoisted from blocks 5 and 9, which are on loop exits?  So the
>>>>> expression that was previously computed in a block on a loop exit
>>>>> gets hoisted outside that block, which possibly makes the allocator
>>>>> more defensive?  Similarly, disabling hoisting of expressions which
>>>>> appeared in blocks on loop exits in the original test-case prevented
>>>>> the extra spill.  The other hoistings didn't seem to matter.
>>>>
>>>> I think that's simply coincidence.  The only thing that makes
>>>> a block that also exits the loop special is that an
>>>> expression could be sunk out of the loop, and hoisting (commoning
>>>> with another path) could prevent that.  But that isn't what is
>>>> happening here, and it would be a pass ordering issue, as
>>>> the sinking pass runs only after hoisting (no idea why exactly,
>>>> but I guess there are cases where we want to prefer CSE over
>>>> sinking).  So you could try whether re-ordering PRE and sinking helps
>>>> your testcase.
>>> Thanks for the suggestions.  Placing the sink pass before PRE works
>>> for both these test-cases!  Sadly it still causes the spill for the
>>> benchmark :-(
>>> I will try to create a better approximation of the original test-case.
>>>>
>>>> What I do see is a missed opportunity to merge the successors
>>>> of BB 4.  After PRE we have
>>>>
>>>>   <bb 4> [local count: 159303558]:
>>>>   pretmp_123 = *tab_37(D);
>>>>   _87 = pretmp_123 + 1;
>>>>   if (c_36 == 65)
>>>>     goto <bb 5>; [34.00%]
>>>>   else
>>>>     goto <bb 8>; [66.00%]
>>>>
>>>>   <bb 5> [local count: 54163210]:
>>>>   *tab_37(D) = _87;
>>>>   _96 = MEM[(char *)s_57 + 1B];
>>>>   if (_96 != 0)
>>>>     goto ; [89.00%]
>>>>   else
>>>>     goto ; [11.00%]
>>>>
>>>>   <bb 8> [local count: 105140348]:
>>>>   *tab_37(D) = _87;
>>>>   _56 = MEM[(char *)s_57 + 1B];
>>>>   if (_56 != 0)
>>>>     goto ; [89.00%]
>>>>   else
>>>>     goto ; [11.00%]
>>>>
>>>> here at least the stores and loads can be hoisted.  Note this
>>>> may also point at the real issue of the code hoisting, which is
>>>> tearing apart the RMW operation?
>>> Indeed, this possibility seems much more likely than the block being
>>> on a loop exit.  I will try to "hardcode" the load/store hoists into
>>> block 4 for this specific test-case to check if that prevents the spill.
>> Even if it prevents the spill in this case, it's likely a good thing to
>> do.  The statements prior to the conditional in bb5 and bb8 should be
>> hoisted, leaving bb5 and bb8 with just their conditionals.
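To make the dump above concrete: a rough, hypothetical source-level sketch
of the shape being discussed (names and control flow invented, not copied
from the attached trans_dfa.c/trans_dfa_2.c) is a read-modify-write of *tab
performed on both branches inside the scanning loop, so hoisting commons
the load and the add into the shared predecessor while the stores stay
behind:

/* Hypothetical reduction, for illustration only; the names and exact
   control flow are invented, not copied from the attached test-cases.  */
int
scan (const char *s, int *tab)
{
  int state = 0;
  while (*s)
    {
      char c = *s;
      if (c == 'A')
        {
          *tab = *tab + 1;  /* read-modify-write on this branch ...  */
          state = 1;
        }
      else
        {
          *tab = *tab + 1;  /* ... and the same RMW on the other one.
                               Hoisting commons the load of *tab and the
                               + 1 into the shared predecessor block, but
                               the two stores stay behind, tearing the RMW
                               apart and lengthening the live range of the
                               incremented value.  */
          state = 2;
        }
      s++;
    }
  return state;
}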
> Hi,
> It seems that disabling forwprop somehow avoids the extra spills on
> the original test-case.
>
> For instance,
> Hoisting without forwprop:
>
> bb 3:
> _1 = tab_1(D) + 8
> pretmp_268 = MEM[tab_1(D) + 8B];
> _2 = pretmp_268 + 1;
> goto <bb 4> or <bb 5>
>
> bb 4:
> *_1 = _2
>
> bb 5:
> *_1 = _2
>
> Hoisting with forwprop:
>
> bb 3:
> pretmp_164 = MEM[tab_1(D) + 8B];
> _2 = pretmp_164 + 1
> goto <bb 4> or <bb 5>
>
> bb 4:
> MEM[tab_1(D) + 8] = _2;
>
> bb 5:
> MEM[tab_1(D) + 8] = _2;
>
> Although in both cases we aren't hoisting stores, the issue with forwprop
> in this case seems to be the folding of
> *_1 = _2
> into
> MEM[tab_1(D) + 8] = _2 ?
This isn't an issue, right?  IIUC, tab_1(D) is used all over the loop, so
propagating _1 as (tab_1(D) + 8) actually removes one live range.

>
> Disabling the folding to mem_ref[base + offset] in forwprop "works" in
> the sense that it creates the same set of hoistings as without forwprop;
> however, it still results in additional spills (albeit of different
> registers).
>
> That's because forwprop seems to be increasing the live range of
> prephitmp_217 by substituting
> _221 + 1 with prephitmp_217 + 2 (_221 is defined as prephitmp_217 + 1).
Hmm, it's hard to discuss private benchmarks; I am not sure in which dump
I should look for the prephitmp_221/prephitmp_217 stuff.

> On the other hand, Bin pointed out to me in private that forwprop also
> helps to restrict register pressure by propagating "tab + const_int"
> for the same test-case.
>
> So I am not really sure if there's an easier fix than having
> heuristics for estimating register pressure at the TREE level?  I would be
An easy fix?  Maybe not.  OTOH, I am more convinced that passes like
forwprop/sink/hoisting can be improved by taking live ranges into
consideration, specifically to direct such passes when moving code across
basic blocks, because inter-block register pressure is hard to resolve
afterwards.
As suggested by Jeff and Richi, I guess the first step would be doing
experiments and collecting more benchmark data for reordering sink before
PRE?  It enables code sinking as well as decreasing register pressure in
the original reduced cases, IIRC.

Thanks,
bin
> grateful for suggestions on how to proceed from here.
> Thanks!
>
> Regards,
> Prathamesh
>>
>> Jeff
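For reference, a minimal, hypothetical C-level sketch of the forwprop
trade-off discussed above (identifiers invented, not taken from the
benchmark): folding the address computation into the stores removes the
address temporary's live range, at the price of keeping the base pointer
itself live up to both stores.

/* Hypothetical illustration only; identifiers are invented.  The two
   functions compute the same thing and differ only in where the address
   arithmetic lives, mirroring the before/after-forwprop GIMPLE quoted
   above (tab + 2 ints corresponds to the + 8B offset).  */

void
before_forwprop (int *tab, int cond)
{
  int *p = tab + 2;     /* separate address temporary, live across the branch */
  int v = *p + 1;
  if (cond)
    *p = v;             /* store through the temporary ...  */
  else
    *p = v;             /* ... on both branches  */
}

void
after_forwprop (int *tab, int cond)
{
  int v = *(tab + 2) + 1;  /* forwprop folds the address into the memory
                              accesses: the address temporary's live range
                              is gone, but tab itself must now stay live up
                              to both stores.  */
  if (cond)
    *(tab + 2) = v;
  else
    *(tab + 2) = v;
}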