From: Corinna Vinschen
To: cygwin-developers@cygwin.com
Date: Tue, 7 Sep 2021 20:26:51 +0200
Subject: Re: cygrunsrv + sshd + rsync = 20 times too slow -- throttled?

On Sep 7 12:14, Ken Brown wrote:
> On 9/6/2021 8:49 AM, Corinna Vinschen wrote:
> > - I think setting chunk to DEFAULT_PIPEBUFSIZE - 1 in the read case and
> >   DEFAULT_PIPEBUFSIZE in the write case by default is dangerous.
> >   Assuming the pipe has been created by a non-Cygwin process, the values
> >   may be way too high.
> >
> > Suggestion: Actually set max_atomic_write to something useful.
> > Set max_atomic_write to DEFAULT_PIPEBUFSIZE in fhandler_pipe::create.
> > In case of stdio handles inherited from non-Cygwin processes, fetch
> > the pipe buffer size via NtQueryInformationFile in
> > dtable::init_std_file_from_handle().  Better, in a matching
> > fhandler_pipe method called from init_std_file_from_handle().

> How about something like the attached (untested)?

LGTM.  I like the name change!
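
For the archives, the NtQueryInformationFile part could look roughly
like this (untested sketch, not the attached patch; the helper name is
made up, but FILE_PIPE_LOCAL_INFORMATION and NT_SUCCESS come straight
from our ntdll.h declarations):

  /* Sketch only: fetch the actual buffer size of a pipe handle
     inherited from a non-Cygwin process. */
  static ULONG
  query_pipe_buf_size (HANDLE h)
  {
    IO_STATUS_BLOCK io;
    FILE_PIPE_LOCAL_INFORMATION fpli;

    if (NT_SUCCESS (NtQueryInformationFile (h, &io, &fpli, sizeof fpli,
                                            FilePipeLocalInformation)))
      /* InboundQuota is the buffer size chosen by whoever created the
         pipe, which may be much smaller than DEFAULT_PIPEBUFSIZE. */
      return fpli.InboundQuota;
    return DEFAULT_PIPEBUFSIZE;  /* fall back to our own default */
  }

init_std_file_from_handle() would then hand the result to the new
fhandler_pipe method instead of blindly assuming DEFAULT_PIPEBUFSIZE.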
> > - What about calling select for writing on pipes read by non-Cygwin
> >   processes?  In that case, we still can't rely on WriteQuotaAvailable,
> >   just as before.
> >
> > I have a vague idea that we might want to count readers in that case,
> > but I have to think about it some more.

> Even if we count readers, we have no way of knowing whether a pending read
> has reduced WriteQuotaAvailable to 0.  Maybe this is a case where we should
> impose some artificial timeout, after which we report write ready.  Falsely
> reporting write ready in this corner case seems better than risking a
> deadlock.

Yeah, it's an almost hopeless case.  A timeout may be a way out.


Corinna
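
P.S.: Just to put the timeout idea into code (illustrative only; the
function name and the 1 ms poll interval are made up, nothing of this
is in the tree):

  /* Wait a bounded time for WriteQuotaAvailable to become non-zero,
     then report write ready anyway rather than risk a deadlock on a
     pipe read by a non-Cygwin process. */
  static bool
  pipe_write_ready_with_timeout (HANDLE h, DWORD timeout_ms)
  {
    IO_STATUS_BLOCK io;
    FILE_PIPE_LOCAL_INFORMATION fpli;
    ULONGLONG start = GetTickCount64 ();

    do
      {
        if (NT_SUCCESS (NtQueryInformationFile (h, &io, &fpli,
                                                sizeof fpli,
                                                FilePipeLocalInformation))
            && fpli.WriteQuotaAvailable > 0)
          return true;          /* genuinely writable */
        Sleep (1);
      }
    while (GetTickCount64 () - start < timeout_ms);
    /* Timed out: the quota may be 0 only because a pending read has
       consumed it, so falsely report write ready instead of hanging. */
    return true;
  }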