Date: Thu, 23 May 2019 12:11:00 -0000
From: Richard Biener
To: Jiufu Guo
cc: gcc-patches@gcc.gnu.org, Jakub Jelinek, Daniel Berlin,
    segher@kernel.crashing.org, wschmidt@linux.ibm.com, law@redhat.com
Subject: Re: [PATCH] A jump threading opportunity for condition branch

On Thu, 23 May 2019, Jiufu Guo wrote:

> Hi,
>
> Richard Biener writes:
>
> > On Tue, 21 May 2019, Jiufu Guo wrote:
> >
> >> Hi,
> >>
> >> This patch implements a new jump-threading opportunity for PR77820.
> >> In this optimization, conditional jumps are merged with unconditional
> >> jumps, and moving the CMP result to a GPR is then eliminated.
> >>
> >> It looks like the following:
> >>
> >>   <P0>
> >>   p0 = a CMP b
> >>   goto <X>;
> >>
> >>   <P1>
> >>   p1 = c CMP d
> >>   goto <X>;
> >>
> >>   <X>
> >>   # phi = PHI <p0, p1>
> >>   if (phi != 0) goto <Y>; else goto <Z>;
> >>
> >> Could be transformed to:
> >>
> >>   <P0>
> >>   p0 = a CMP b
> >>   if (p0 != 0) goto <Y>; else goto <Z>;
> >>
> >>   <P1>
> >>   p1 = c CMP d
> >>   if (p1 != 0) goto <Y>; else goto <Z>;
> >>
> >> This optimization eliminates:
> >> 1. saving the CMP result: p0 = a CMP b.
> >> 2. the additional CMP on the branch: if (phi != 0).
> >> 3. converting the CMP result, phi = (INT_CONV) p0, if present.
> >>
> >> Bootstrapped and tested on powerpc64le with no regressions (one case
> >> is improved) and new testcases are added.  Is this ok for trunk?
> >>
> >> Thanks!
> >> Jiufu Guo
> >>
> ...
> >> diff --git a/gcc/tree-ssa-threadedge.c b/gcc/tree-ssa-threadedge.c
> >> index c3ea2d6..23000f6 100644
> >> --- a/gcc/tree-ssa-threadedge.c
> >> +++ b/gcc/tree-ssa-threadedge.c
> >> @@ -1157,6 +1157,90 @@ thread_through_normal_block (edge e,
> >>      return 0;
> >>  }
> >>
> >> +/* Return true if PHI's INDEX-th incoming value is a CMP, and the CMP is
> >> +   defined in the incoming basic block.  Otherwise return false.  */
> >> +static bool
> >> +cmp_from_unconditional_block (gphi *phi, int index)
> >> +{
> >> +  tree value = gimple_phi_arg_def (phi, index);
> >> +  if (!(TREE_CODE (value) == SSA_NAME && has_single_use (value)))
> >> +    return false;
> >
> > Not sure why we should reject a constant here but I guess we
> > expect it to find a simplified condition anyways ;)
> >
> A constant could be accepted here, like "# t_9 = PHI <...>" with a
> constant argument.  I found this case is already handled by other
> jump-threading code, like the 'ethread' pass.
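To make the shape concrete for readers outside the thread: a C
function of the form below produces the GIMPLE pattern quoted above.
This is a hypothetical illustration only, not the testcase attached
to PR77820.

/* Hypothetical illustration, not the PR77820 testcase: each arm
   computes a comparison, the two results meet in a PHI at the join
   block, and the join block immediately branches on the PHI.  After
   the transform each arm branches on its own comparison directly,
   so the PHI, the extra test, and any (INT_CONV) copy go away.  */
int
foo (int a, int b, int c, int d, int x)
{
  int t;
  if (x)
    t = a < b;  /* p0 = a CMP b in <P0>.  */
  else
    t = c < d;  /* p1 = c CMP d in <P1>.  */
  if (t)        /* # phi = PHI <p0, p1>; if (phi != 0) in <X>.  */
    return 10;
  return 20;
}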
> >> +
> >> +  gimple *def = SSA_NAME_DEF_STMT (value);
> >> +
> >> +  if (!is_gimple_assign (def))
> >> +    return false;
> >> +
> >> +  if (CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (def)))
> >> +    {
> >> +      value = gimple_assign_rhs1 (def);
> >> +      if (!(TREE_CODE (value) == SSA_NAME && has_single_use (value)))
> >> +        return false;
> >> +
> >> +      def = SSA_NAME_DEF_STMT (value);
> >> +
> >> +      if (!is_gimple_assign (def))
> >> +        return false;
> >
> > too much vertical space.
> >
> Thanks, I will refine it.
> >> +    }
> >> +
> >> +  if (TREE_CODE_CLASS (gimple_assign_rhs_code (def)) != tcc_comparison)
> >> +    return false;
> >> +
> >> +  /* Check if phi's incoming value is defined in the incoming basic_block.  */
> >> +  edge e = gimple_phi_arg_edge (phi, index);
> >> +  if (def->bb != e->src)
> >> +    return false;
> >
> > why does this matter?
> >
> Through preparing paths and duplicating blocks, this transform can also
> help to combine a cmp in the previous block with a gcond in the current
> block.  "if (def->bb != e->src)" makes sure the cmp is defined in the
> incoming block of the current one; combining the cmp with the gcond is
> then safe.  If the cmp is defined far from the incoming block, it would
> be hard to achieve the combining, and the transform may not be needed.

We're in SSA form so the "combining" doesn't really care where the
definition comes from.

> >> +
> >> +  if (!single_succ_p (def->bb))
> >> +    return false;
> >
> > Or this?  The actual threading will ensure this will hold true.
> >
> Yes, other threading code checks this and ensures it is true, like the
> function thread_through_normal_block.  Since this new function is
> invoked outside thread_through_normal_block, checking single_succ_p is
> also needed for this case.

I mean threading will isolate the path, making this trivially true.
It's also no requirement for combining; in fact, due to the single-use
check the definition can be sunk across the edge already (if the edge's
dest didn't have multiple predecessors, which this threading will fix
as well).

> >> +  return true;
> >> +}
> >> +
> >> +/* There are basic blocks that look like:
> >> +
> >> +   <P0>
> >> +   p0 = a CMP b ; or p0 = (INT)( a CMP b)
> >> +   goto <X>;
> >> +
> >> +   <P1>
> >> +   p1 = c CMP d
> >> +   goto <X>;
> >> +
> >> +   <X>
> >> +   # phi = PHI <p0, p1>
> >> +   if (phi != 0) goto <Y>; else goto <Z>;
> >> +
> >> +   Then, <X>: a trivial join block.
> >> +
> >> +   Check if BB is <X> in a pattern like the above.  */
> >> +
> >> +bool
> >> +is_trivial_join_block (basic_block bb)
> >
> > I'd make this work on a specific edge.
> >
> > edge_forwards_conditional_to_conditional_jump_through_empty_bb_p (edge e)
> > {
> >   basic_block b = e->dest;
> >
> > maybe a too elaborate name ;)
> >
> Thanks for helping to name the function!  It is very valuable for me ;)
> >> +{
> >> +  gimple *gs = last_and_only_stmt (bb);
> >> +  if (gs == NULL)
> >> +    return false;
> >> +
> >> +  if (gimple_code (gs) != GIMPLE_COND)
> >> +    return false;
> >> +
> >> +  tree cond = gimple_cond_lhs (gs);
> >> +
> >> +  if (TREE_CODE (cond) != SSA_NAME)
> >> +    return false;
> >
> > space after "if(".  Too much vertical space in this function
> > for my taste btw.
> Will update this.
> >
> > For the forwarding to work we want a NE_EXPR or EQ_EXPR
> > as gimple_cond_code and an integer_one_p or integer_zero_p
> > gimple_cond_rhs.
> Right, checking those would be safer.  Since no issue was found during
> bootstrap and regression tests, I did not add these checks.  I will
> add them.
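A minimal sketch of the extra check being discussed, assuming the
usual GIMPLE accessors; the helper name is made up for illustration
and is not part of the posted patch.

/* Sketch only: accept just "if (x != 0/1)" / "if (x == 0/1)" style
   conditions, which is what the forwarding relies on.  GS is the
   GIMPLE_COND returned by last_and_only_stmt above; the function
   name is illustrative, not from the patch.  */
static bool
cond_tests_bool_p (gcond *gs)
{
  enum tree_code code = gimple_cond_code (gs);
  if (code != NE_EXPR && code != EQ_EXPR)
    return false;

  tree rhs = gimple_cond_rhs (gs);
  return integer_one_p (rhs) || integer_zero_p (rhs);
}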
> > > >> + > >> + if (gimple_code (SSA_NAME_DEF_STMT (cond)) != GIMPLE_PHI) > >> + return false; > >> + > >> + gphi *phi = as_a (SSA_NAME_DEF_STMT (cond)); > > > > I think to match your pattern you want to check that > > gimple_bb (phi) == bb as well here. > Right, it should be checked. I will update. > > > >> + for (unsigned int i = 0; i < phi->nargs; i++) > >> + if (!cmp_from_unconditional_block (phi, i)) > > > > Just process the incoming edge argument and inline the > > helper. You can use PHI_ARG_DEF_FROM_EDGE here. > I will refine code, and try to use it. > > > > Thanks for integrating this into jump-threading - it does look > > like a good fit. > > > > How often does this trigger during bootstrap? > Thanks for your sugguestion, this could help to evaluate patch. During > bootstrap(stage 2 or 3), in gcc source code, 1300-1500 basic blocks are > fullfile this tranform. Thanks, Richard.