From: Andrey Belevantsev
Date: Fri, 27 Jun 2008 13:10:00 -0000
To: Ian Lance Taylor
CC: GCC Patches, Jim Wilson, Vladimir Makarov
Subject: Re: Selective scheduling pass - middle end changes [1/1]

Hello Ian,

Sorry for the delay in answering -- I've just got back from traveling
(including the summit).  I'm now working on fixing the issues you've
pointed out.

Ian Lance Taylor wrote:
> I would suggest that you have the ia64 machine_reorg pass call
> compute_alignments itself.  Admittedly compute_alignments will be run
> twice for the ia64, but it should be a fairly fast pass--it loops
> through all the basic blocks, but not through all the insns.

I will try that.

> Unfortunately, there is no mapping from the UID to the insn.  I was
> thinking of, e.g., using the UID to scale array sizes.
>
> If you look at haifa-sched.c, you'll see that it uses calls like
> redirect_edge_succ, generates branch insns itself, and calls
> extend_global (a haifa-sched.c function) to build information about
> the insn.  Is it reasonable for your code to work at that level?

That would require reimplementing e.g. split_edge and
redirect_edge_and_branch inside the scheduler so that we can see the
actual insns created.  I don't think this is reasonable.  If you're
uncomfortable with the idea of the hook, I can invent something along
the lines of searching for the new jumps in the code and passing them
to the initialization routines.  This would effectively find insns
given their UIDs and the knowledge that they were created somewhere
near the given point in the CFG (a sketch of how such a search might
look is at the end of this mail).  I don't think this will happen often
enough to have a significant effect on compile time; the hook just
seemed the simpler way of doing this.

> Since you have data about all insns, don't you also need data about
> insns which have changed or are deleted?

Not quite.  We always change an insn's UID when its pattern is changed
(which also does not happen very often).  The dependence caches used
for the on-the-fly analysis rely on this, as they use UIDs as keys.
Overall, the data is maintained valid only for insns that are actually
in the insn stream, as we only either collect them as possible
scheduling candidates or propagate through them.  The data for deleted
insns remains in the array and is freed after the current region has
been scheduled (the UID-indexed array idiom is also sketched at the
end of this mail).

Andrey
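
P.S. Since the UID-scaled arrays come up several times above, here is a
minimal sketch of the idiom, assuming the usual GCC internal headers
(config.h, system.h, coretypes.h, rtl.h).  This is not the actual
sel-sched code; sel_insn_data, s_i_d and extend_insn_data are
hypothetical names used only for illustration.

/* Per-insn data indexed by INSN_UID; the array is grown whenever new
   insns (and thus new UIDs) appear.  */
struct sel_insn_data
{
  int seqno;        /* Cycle on which the insn was scheduled.  */
  bool analyzed_p;  /* Whether dependence data was computed.  */
};

static struct sel_insn_data *s_i_d = NULL;
static int s_i_d_size = 0;

/* Make sure S_I_D has a zeroed slot for every UID currently in use.  */
static void
extend_insn_data (void)
{
  int new_size = get_max_uid () + 1;

  if (new_size > s_i_d_size)
    {
      s_i_d = xrealloc (s_i_d, new_size * sizeof (*s_i_d));
      memset (s_i_d + s_i_d_size, 0,
              (new_size - s_i_d_size) * sizeof (*s_i_d));
      s_i_d_size = new_size;
    }
}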
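
P.P.S. Likewise, a sketch of the hook-less alternative I describe above:
record get_max_uid () before the CFG transformation, then scan the
affected block for insns whose UIDs are above that watermark.
sel_init_new_insn stands in for whatever initialization routine would
actually be used; it is a hypothetical name.

/* OLD_MAX_UID is the value of get_max_uid () recorded before calling
   split_edge / redirect_edge_and_branch; BB is the block near which
   the new insns were created.  */
static void
init_new_insns_since (int old_max_uid, basic_block bb)
{
  rtx insn;

  FOR_BB_INSNS (bb, insn)
    if (INSN_P (insn) && INSN_UID (insn) >= old_max_uid)
      sel_init_new_insn (insn);
}

A caller would look something like

  int old_max_uid = get_max_uid ();
  basic_block new_bb = split_edge (e);
  init_new_insns_since (old_max_uid, new_bb);

so the extra cost is a scan of a single (usually tiny) block, which is
why I expect no significant compile-time effect.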