From: Jon Turney
Date: Mon, 16 May 2022 17:49:11 +0100
Subject: Re: load average calculation imperfections
To: cygwin-developers@cygwin.com, Mark Geisert
On 16/05/2022 06:25, Mark Geisert wrote:
> Corinna Vinschen wrote:
>> On May 13 13:04, Corinna Vinschen wrote:
>>> On May 13 11:34, Jon Turney wrote:
>>>> On 12/05/2022 10:48, Corinna Vinschen wrote:
>>>>> On May 11 16:40, Mark Geisert wrote:
>>>>>>
>>>>>> The first counter read now gets error 0xC0000BC6 ==
>>>>>> PDH_INVALID_DATA, but no errors on subsequent counter reads.  This
>>>>>> sounds like it now matches what Corinna reported for W11.  I
>>>>>> wonder if she's running build 1706 already.
>>>>>
>>>>> Erm... looks like I didn't read your mail thoroughly enough.
>>>>>
>>>>> This behaviour, the first call returning with PDH_INVALID_DATA and
>>>>> only subsequent calls returning valid(?) values, is what breaks
>>>>> the getloadavg function and, consequently, /proc/loadavg.  So
>>>>> maybe xload now works, but Cygwin is still broken.
>>>>
>>>> The first attempt to read '% Processor Time' is expected to fail
>>>> with PDH_INVALID_DATA, since it doesn't have a value at a
>>>> particular instant, but one averaged over a period of time.
>>>>
>>>> This is what the following comment is meant to record:
>>>>
>>>> "Note that PDH will only return data for '% Processor Time' after
>>>> the second call to PdhCollectQueryData(), as it's computed over an
>>>> interval, so the first attempt to estimate load will fail and 0.0
>>>> will be returned."
>>>
>>> But.
>>>
>>> Every invocation of getloadavg() returns 0.  Even under load.
>>> Calling `cat /proc/loadavg' is an exercise in futility.
>>>
>>> The only way to make getloadavg() work is to call it in a loop from
>>> the same process with a 1 sec pause between invocations.  In that
>>> case, even a parallel `cat /proc/loadavg' shows the same load
>>> values.
>>>
>>> However, as soon as I stop the looping process, the /proc/loadavg
>>> values are frozen in the last state they had when stopping that
>>> process.
>>
>> Oh, and, stopping and restarting all Cygwin processes in the session
>> will reset the loadavg to 0.
>>
>>> Any suggestions how to fix this?
>
> I'm getting somewhat better behavior from repeated 'cat /proc/loadavg'
> with the following update to Cygwin's loadavg.cc:
>
> diff --git a/winsup/cygwin/loadavg.cc b/winsup/cygwin/loadavg.cc
> index 127591a2e..cceb3e9fe 100644
> --- a/winsup/cygwin/loadavg.cc
> +++ b/winsup/cygwin/loadavg.cc
> @@ -87,6 +87,9 @@ static bool load_init (void)
>      }
>
>      initialized = true;
> +
> +    /* prime the data pump, hopefully */
> +    (void) PdhCollectQueryData (query);
>    }

Yeah, something like this might be a good idea, as at the moment we
report load averages of 0 for the first 5 seconds after someone first
asks for one.

It's not ideal, because with this change we go on to call
PdhCollectQueryData() again very shortly afterwards, so the first value
for '% Processor Time' is measured over a very short interval, and so
may be very inaccurate.
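For anyone who wants to see the two-sample behaviour in isolation, here
is a minimal standalone sketch (the counter path and the 1 second delay
are illustrative choices of mine, not necessarily what loadavg.cc
does):

/* pdh-two-sample.c: show that '% Processor Time' has no value until a
   second PdhCollectQueryData() defines an interval to average over.
   Build with e.g.: gcc pdh-two-sample.c -lpdh */
#include <windows.h>
#include <pdh.h>
#include <pdhmsg.h>
#include <stdio.h>

int
main (void)
{
  PDH_HQUERY query;
  PDH_HCOUNTER counter;
  PDH_FMT_COUNTERVALUE value;
  PDH_STATUS status;

  PdhOpenQueryA (NULL, 0, &query);
  PdhAddEnglishCounterA (query, "\\Processor(_Total)\\% Processor Time",
                         0, &counter);

  /* The first sample only establishes a baseline, so reading the
     counter now fails with PDH_INVALID_DATA (0xC0000BC6). */
  PdhCollectQueryData (query);
  status = PdhGetFormattedCounterValue (counter, PDH_FMT_DOUBLE, NULL,
                                        &value);
  printf ("first read:  0x%08lx\n", (unsigned long) status);

  /* The second sample, some interval later, gives PDH something to
     average over, so the read now succeeds. */
  Sleep (1000);
  PdhCollectQueryData (query);
  status = PdhGetFormattedCounterValue (counter, PDH_FMT_DOUBLE, NULL,
                                        &value);
  if (status == ERROR_SUCCESS)
    printf ("second read: %.1f%%\n", value.doubleValue);

  PdhCloseQuery (query);
  return 0;
}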
>    return initialized;
>
> It's only somewhat better because it seems like multiple updaters of
> the load average act sort of independently.  It's hard to characterize
> what I'm seeing but let me try.
>
> First let me shove xload aside by saying it shows instantaneous load
> and is thus a different animal.  It only cares about total %processor
> time, so its load average value never goes higher than ncpus, nor does
> it have any decay behavior built in.
>
> Any other Cygwin app I know of is using getloadavg() under the hood.
> When it calculates a new set of 1, 5, and 15 minute load averages, it
> uses total %processor time and total processor queue length.  It has a
> decay behavior that I think has been around since early Unix.  What I
> haven't noticed before is an "inverse" decay behavior that seems wrong
> to me, but maybe Linux has this.  That is, if you have just one
> compute-bound process the load average won't reach 1.0 until that
> process has been running for a full minute.  You don't see
> instantaneous load.

In fact it asymptotically approaches 1, so it wouldn't reach it until
you've had a load of 1 for a long time compared to the time you are
averaging over.  Starting from idle, a unit load sustained for 1 minute
would result in a 1-minute load average of 1 - (1/e) = ~0.63.

See https://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
for some discussion of that.

That's just how it works, as a measure of demand, not load.

> I guess that's all reasonable so far.  But I think the wrinkle Cygwin
> is adding, allowing the load average to be calculated by multiple
> updaters, makes it seem like updaters are not keeping in sync with
> each other despite the loadavginfo shared data.  I can't quite wrap my
> head around the current implementation to prove or disprove its
> correctness.
>
> Ideally, the shared data should have the most recently calculated 1,
> 5, and 15 minute load averages and a timestamp of when they were
> calculated.  And then any process that calls getloadavg() should
> independently decide whether it's time to calculate an updated set of
> values for machine-wide use.  But can the decay calculations get
> messed up due to multiple updaters?  I want to say no, but I can't
> quite convince myself.  Each updater has its own idea of the 1,5,15
> timespans, doesn't it, because updates can occur at random, rather
> than at a set period like a kernel would do?

I think not, because last_time, the unix epoch time at which the last
update was computed, is part of the shared loadavginfo state, and
updating it is guarded by a mutex.

That's not to say that this code might not be wrong in some other
way :)
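To make the arithmetic concrete, here's a sketch of the scheme as I
understand it (illustrative, not the actual loadavg.cc code; the real
state lives in Cygwin-wide shared memory guarded by a Windows mutex,
for which a file-scope variable and a pthread mutex stand in below):

/* calc-load.c: exponentially-damped averages updated over a variable
   interval dt since whichever process last ran the update, rather
   than at a fixed kernel tick.  Starting from idle, a demand of 1
   held for 60 seconds leaves the 1-minute figure at
   1 - exp(-1) = ~0.63, as above.
   Build with e.g.: gcc -c calc-load.c (link with -lm) */
#include <math.h>
#include <time.h>
#include <pthread.h>

struct loadavginfo
{
  double loadavg[3];   /* 1, 5, 15 minute averages */
  time_t last_time;    /* unix epoch time of the last update */
};

static struct loadavginfo li;
static pthread_mutex_t li_mutex = PTHREAD_MUTEX_INITIALIZER;

void
update_loadavg (double active_tasks)
{
  static const double period[3] = { 60.0, 300.0, 900.0 };
  time_t now = time (NULL);

  pthread_mutex_lock (&li_mutex);
  /* Because last_time is shared and only touched under the mutex,
     every updater folds in the same interval, however irregularly
     the updates arrive. */
  double dt = difftime (now, li.last_time);
  if (dt > 0)
    {
      for (int i = 0; i < 3; i++)
        {
          double decay = exp (-dt / period[i]);
          li.loadavg[i] = li.loadavg[i] * decay
                          + active_tasks * (1.0 - decay);
        }
      li.last_time = now;
    }
  pthread_mutex_unlock (&li_mutex);
}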