Date: Mon, 30 Aug 2021 19:00:45 +0200
From: Corinna Vinschen
To: cygwin-developers@cygwin.com
Subject: Re: cygrunsrv + sshd + rsync = 20 times too slow -- throttled?
Reply-To: cygwin-developers@cygwin.com
References: <20210828184102.f2206a8a9e5fe5cf24bf5e45@nifty.ne.jp>
 <20210829180729.48b4e877f773cb3980c5766d@nifty.ne.jp>
 <20210830091314.f9a2cb71794d0f68cdb5eba7@nifty.ne.jp>
 <20210830092259.52f7d54fc3fa340738373af4@nifty.ne.jp>
 <529d7dd7-d876-ca51-cc1f-e414d3c24f71@cornell.edu>

On Aug 30 17:53, Corinna Vinschen wrote:
> On Aug 30 16:05, Corinna Vinschen wrote:
> > On Aug 30 09:36, Ken Brown wrote:
> > > BTW, when I was working on the pipe approach to AF_UNIX sockets
> > > (topic/af_unix branch), I had occasion to step through
> > > select.cc:pipe_data_available in gdb, and the use of
> > > fpli.OutboundQuota - fpli.ReadDataAvailable definitely seemed
> > > wrong to me.
> > > So when I wrote peek_socket_unix on that branch, I used
> > > fpli.WriteQuotaAvailable, as Takashi is suggesting now.
> >
> > If that's working reliably these days (keeping fingers crossed for W7),
> > it's ok if we use that.  We may want to check if the above observation
> > in terms of WriteQuotaAvailable on a pipe with a pending read is still
> > an issue.
>
> Ok, I wrote a small testcase.  It creates a named pipe, reads from the
> pipe, then, later, writes to the pipe.  Interlaced with these calls, it
> calls NtQueryInformationFile(FilePipeLocalInformation) on the write side
> of the pipe.  Kind of like this:
>
>   CreatePipe
>   NtQueryInformationFile
>   ReadFile
>   NtQueryInformationFile
>   WriteFile
>   NtQueryInformationFile
>
> Here's the result:
>
> Before ReadFile:
>
>   InboundQuota:        65536
>   ReadDataAvailable:       0
>   OutboundQuota:       65536
>   WriteQuotaAvailable: 65536
>
> While ReadFile is running:
>
>   InboundQuota:        65536
>   ReadDataAvailable:       0
>   OutboundQuota:       65536
>   WriteQuotaAvailable: 65494   !!!
>
> After WriteFile and ReadFile succeeded:
>
>   InboundQuota:        65536
>   ReadDataAvailable:       0
>   OutboundQuota:       65536
>   WriteQuotaAvailable: 65536
>
> That means, while a reader on the read side is waiting for data, the
> WriteQuotaAvailable on the write side is decremented by the amount of
> data requested by the reader (42 bytes in my case), just as outlined
> in that mail from 2004.  And this is on W10 now.
>
> What to do with this information?  TBD.

Ok, let's discuss this.

I added more code to my testcase and here's what I see.  I dropped all
data from the output which doesn't change.  What I'm trying to get a
grip on are the dependencies here.

After creating the pipe:

  read side:   ReadDataAvailable:       0
  write side:  WriteQuotaAvailable: 65536

After writing 20 bytes...

  read side:   ReadDataAvailable:      20
  write side:  WriteQuotaAvailable: 65516

After writing 40 more bytes...

  read side:   ReadDataAvailable:      60
  write side:  WriteQuotaAvailable: 65476

After reading 42 bytes...

  read side:   ReadDataAvailable:      18
  write side:  WriteQuotaAvailable: 65518

After writing 20 bytes...

  read side:   ReadDataAvailable:      38
  write side:  WriteQuotaAvailable: 65498

*While* reading 42 bytes with an empty buffer...

  read side:   ReadDataAvailable:       0
  write side:  WriteQuotaAvailable: 65494

Another important fun fact shows up when the read and write buffer
sizes are specified differently.  I called CreateNamedPipe with an
outbuffer size of 32K and an inbuffer size of 64K:

After creating the pipe:

  read side:   InboundQuota:        65536
               ReadDataAvailable:       0
               OutboundQuota:       32768
               WriteQuotaAvailable: 32768

  write side:  InboundQuota:        65536
               ReadDataAvailable:       0
               OutboundQuota:       32768
               WriteQuotaAvailable: 65536   !!!

This last data point shows that:

- InboundQuota and OutboundQuota are always constant values and do not
  depend on the side the information has been queried on.  That
  certainly makes sense.

- WriteQuotaAvailable does not depend on the OutboundQuota, but on the
  InboundQuota, and very likely on the InboundQuota of the read side.
  The OutboundQuota *probably* only makes sense when using named pipes
  with remote clients, which we never do anyway.

The preceding output shows that ReadDataAvailable on the read side and
WriteQuotaAvailable on the write side are connected.  If we write 20
bytes, ReadDataAvailable is incremented by 20 and WriteQuotaAvailable
is decremented by 20.  So:

  write.WriteQuotaAvailable == InboundQuota - read.ReadDataAvailable

Except when a ReadFile is pending on the read side.  It's as if the
running ReadFile already reserved write quota.
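
For reference, a stripped-down sketch of such a testcase could look
like the code below.  It's not my exact testcase: the pipe name, buffer
sizes and error handling are simplified, the FILE_PIPE_LOCAL_INFORMATION
layout is copied from the DDK's ntifs.h, and you have to link against
ntdll (e.g. -lntdll).

  #include <windows.h>
  #include <winternl.h>
  #include <stdio.h>

  /* Layout from the DDK's ntifs.h; not declared in winternl.h. */
  typedef struct _FILE_PIPE_LOCAL_INFORMATION
  {
    ULONG NamedPipeType;
    ULONG NamedPipeConfiguration;
    ULONG MaximumInstances;
    ULONG CurrentInstances;
    ULONG InboundQuota;
    ULONG ReadDataAvailable;
    ULONG OutboundQuota;
    ULONG WriteQuotaAvailable;
    ULONG NamedPipeState;
    ULONG NamedPipeEnd;
  } FILE_PIPE_LOCAL_INFORMATION;

  #define FilePipeLocalInformation ((FILE_INFORMATION_CLASS) 24)

  #ifndef NT_SUCCESS
  #define NT_SUCCESS(s) (((NTSTATUS) (s)) >= 0)
  #endif

  static void
  dump_fpli (const char *when, HANDLE h)
  {
    IO_STATUS_BLOCK io;
    FILE_PIPE_LOCAL_INFORMATION fpli;

    if (NT_SUCCESS (NtQueryInformationFile (h, &io, &fpli, sizeof fpli,
                                            FilePipeLocalInformation)))
      printf ("%s InboundQuota %lu ReadDataAvailable %lu "
              "OutboundQuota %lu WriteQuotaAvailable %lu\n", when,
              fpli.InboundQuota, fpli.ReadDataAvailable,
              fpli.OutboundQuota, fpli.WriteQuotaAvailable);
  }

  int
  main (void)
  {
    const char *name = "\\\\.\\pipe\\fpli-test";  /* arbitrary name */
    char buf[42];
    DWORD n;
    OVERLAPPED ov = { 0 };

    /* Read side created overlapped, so the pending ReadFile below
       doesn't block the process. */
    HANDLE rd = CreateNamedPipeA (name,
                                  PIPE_ACCESS_INBOUND | FILE_FLAG_OVERLAPPED,
                                  PIPE_TYPE_BYTE | PIPE_READMODE_BYTE
                                  | PIPE_WAIT,
                                  1, 65536, 65536, 0, NULL);
    HANDLE wr = CreateFileA (name, GENERIC_WRITE, 0, NULL, OPEN_EXISTING,
                             0, NULL);

    ov.hEvent = CreateEventA (NULL, TRUE, FALSE, NULL);

    dump_fpli ("write side, before ReadFile:  ", wr);

    /* Read 42 bytes from an empty pipe -> the read stays pending and,
       per the observation above, reserves 42 bytes of write quota. */
    ReadFile (rd, buf, sizeof buf, NULL, &ov);
    dump_fpli ("write side, ReadFile pending: ", wr);  /* 65536 - 42 */

    WriteFile (wr, "data", 4, &n, NULL);
    GetOverlappedResult (rd, &ov, &n, TRUE);           /* read satisfied */
    dump_fpli ("write side, after read:       ", wr);

    CloseHandle (wr);
    CloseHandle (rd);
    return 0;
  }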
So the write-side WriteQuotaAvailable is the number of bytes we can
write without blocking *after* all pending ReadFiles have been
satisfied.  Unfortunately that doesn't really make sense when looked at
from user space.

What that means in the first place is that WriteQuotaAvailable on the
write side is unreliable.  What we really need is
InboundQuota - read.ReadDataAvailable.  The problem with that is that
the write side usually has no access to the read side of the pipe.

Long story short, I have no idea how to fix that ATM.


Corinna
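
P.S.: Purely to illustrate the "InboundQuota - read.ReadDataAvailable"
point, and assuming a caller that somehow *does* have a handle to the
read side (which, as said, the write side usually doesn't), the value
we'd actually want could be computed like this.  This is a hypothetical
helper, not a fix for the access problem; it reuses the
FILE_PIPE_LOCAL_INFORMATION declaration from the sketch above.

  /* Bytes writable without blocking, derived from the read side. */
  static LONG
  pipe_write_space (HANDLE read_side)
  {
    IO_STATUS_BLOCK io;
    FILE_PIPE_LOCAL_INFORMATION fpli;

    if (!NT_SUCCESS (NtQueryInformationFile (read_side, &io, &fpli,
                                             sizeof fpli,
                                             FilePipeLocalInformation)))
      return -1;
    /* Unlike the write side's WriteQuotaAvailable, this value is not
       disturbed by ReadFiles pending on the read side. */
    return (LONG) (fpli.InboundQuota - fpli.ReadDataAvailable);
  }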