From: David Edelsohn
Date: Fri, 20 Dec 2019 01:32:00 -0000
Subject: Re: Does gcc automatically lower optimization level for very large routines?
To: Jeffrey Law
Cc: Qing Zhao, Dmitry Mikushin, GCC

On Thu, Dec 19, 2019 at 7:41 PM Jeff Law wrote:
>
> On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
> > Hi, Dmitry,
> >
> > Thanks for the response.
> >
> > Yes, routine size alone cannot determine the complexity of a routine. Different compiler analyses might use different formulas, with multiple parameters, to compute its complexity.
> >
> > However, the common issue is: when the complexity of a specific routine for a specific compiler analysis exceeds a threshold, the compiler might consume all the available memory and abort the compilation.
> >
> > Therefore, in order to avoid a failed compilation due to running out of memory, some compilers set a threshold on the complexity of a specific compiler analysis (for example, the more aggressive data-flow analysis); when the threshold is met, that aggressive analysis is turned off for the routine, or the optimization level is lowered for the routine (with a warning issued at compilation time about the adjustment).
> >
> > I am wondering whether GCC has such a capability, or any option to increase or decrease the threshold for some of the common analyses (for example, data flow)?
>
> There are various places where, if we hit a limit, then we throttle
> optimization.  But it's not done consistently or pervasively.
>
> Those limits are typically around things like CFG complexity.
>
> We do _not_ try to recover after an out-of-memory error, or anything
> like that.

I have mentioned a few times before that the IBM XL compiler allows
the user to specify the maximum memory utilization for the compiler
(including "unlimited").  The compiler's optimization passes estimate
the memory usage of the data structures of each optimization pass.
If the memory usage is too high, the pass attempts to subdivide the
region and recalculates the estimated memory usage, recursing until
either it can apply the optimization within the memory limit or the
optimization would no longer be effective.

The IBM XL compiler does not try to recover from an out-of-memory
error, but it does explicitly account for the memory use of its
optimization passes.  It does not adjust the complexity of an
optimization, but it does adjust the size of the region, or other
parameters, to reduce the memory usage of the data structures for the
optimization.

Thanks, David
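[Editor's note: the region-subdivision strategy described above can be sketched roughly as follows. This is a minimal toy illustration, not XL's (or GCC's) actual implementation; the region model, the quadratic cost estimate, and all names are invented for this sketch.]

```python
def optimize_region(region, limit, estimate, apply_opt, split):
    """Apply an optimization to `region` if its estimated memory cost
    fits under `limit`; otherwise subdivide and recurse, forgoing the
    optimization on any fragment that cannot be made to fit."""
    if estimate(region) <= limit:
        apply_opt(region)
        return True
    parts = split(region)
    if parts is None:          # cannot subdivide further: skip the pass here
        return False
    applied = False
    for part in parts:         # attempt every fragment, even if one fails
        applied |= optimize_region(part, limit, estimate, apply_opt, split)
    return applied

# Toy model: a "region" is a list of instructions, and the pass's data
# structures grow quadratically with region size (as, e.g., dataflow
# bit-matrices do).
def estimate(region):
    return len(region) ** 2

def split(region):
    if len(region) < 2:
        return None
    mid = len(region) // 2
    return [region[:mid], region[mid:]]

optimized = []
insns = list(range(100))       # a 100-instruction routine
optimize_region(insns, limit=1000, estimate=estimate,
                apply_opt=optimized.append, split=split)
# 100**2 = 10000 > 1000 and 50**2 = 2500 > 1000, so the routine is
# split twice, into four 25-instruction pieces each estimated at 625.
```

The key design point is that the memory budget shapes *where* the optimization runs rather than *how* it works: each fragment still gets the full-strength analysis, just over a smaller scope.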