From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 30 Aug 2021 17:53:21 +0200
From: Corinna Vinschen
To: cygwin-developers@cygwin.com
Subject: Re: cygrunsrv + sshd + rsync = 20 times too slow -- throttled?
Reply-To: cygwin-developers@cygwin.com
List-Id: Cygwin core component developers mailing list
On Aug 30 16:05, Corinna Vinschen wrote:
> On Aug 30 09:36, Ken Brown wrote:
> > BTW, when I was working on the pipe approach to AF_UNIX sockets
> > (topic/af_unix branch), I had occasion to step through
> > select.cc:pipe_data_available in gdb, and the use of fpli.OutboundQuota -
> > fpli.ReadDataAvailable definitely seemed wrong to me.  So when I wrote
> > peek_socket_unix on that branch, I used fpli.WriteQuotaAvailable, as
> > Takashi is suggesting now.
>
> If that's working reliably these days (keeping fingers crossed for W7),
> it's ok if we use that.  We may want to check if the above observation
> in terms of WriteQuotaAvailable on a pipe with a pending read is still
> an issue.

Ok, I wrote a small testcase.
It creates a named pipe, reads from the pipe, then, later, writes to the
pipe.  Interleaved with these calls, it calls
NtQueryInformationFile(FilePipeLocalInformation) on the write side of
the pipe.  Kind of like this:

  CreatePipe
  NtQueryInformationFile
  ReadFile
  NtQueryInformationFile
  WriteFile
  NtQueryInformationFile

Here's the result:

  Before ReadFile:
    InboundQuota:        65536
    ReadDataAvailable:       0
    OutboundQuota:       65536
    WriteQuotaAvailable: 65536

  While ReadFile is running:
    InboundQuota:        65536
    ReadDataAvailable:       0
    OutboundQuota:       65536
    WriteQuotaAvailable: 65494   !!!

  After WriteFile and ReadFile succeeded:
    InboundQuota:        65536
    ReadDataAvailable:       0
    OutboundQuota:       65536
    WriteQuotaAvailable: 65536

That means that while a reader on the read side is waiting for data,
WriteQuotaAvailable on the write side is decremented by the amount of
data requested by the reader (42 bytes in my case), just as outlined in
that mail from 2004.  And this is on W10 now.

What to do with this information?  TBD.

Side note: My testcase starts a second thread to call ReadFile.  For
that reason I was using synchronous I/O on the pipe since, well, never
mind if that thread is blocked in ReadFile, right?  Nothing keeps us
from calling NtQueryInformationFile on the write side of the pipe,
right?

Wrong.  While the second thread was blocked in ReadFile, the call to
NtQueryInformationFile was blocking, too :-P

I had to convert the read side of the pipe to asynchronous mode to be
able to call NtQueryInformationFile(FilePipeLocalInformation) on the
write side of the pipe while the read side is performing a ReadFile
operation.


Corinna