Date: Tue, 26 Jul 2022 13:04:25 +0200
From: "Jason A. Donenfeld"
To: Florian Weimer
Cc: libc-alpha@sourceware.org, Adhemerval Zanella Netto, Cristian Rodríguez, Paul Eggert, linux-crypto@vger.kernel.org
Subject: Re: [PATCH v2] arc4random: simplify design for better safety
References: <20220725225728.824128-1-Jason@zx2c4.com> <20220725232810.843433-1-Jason@zx2c4.com> <87k080i4fo.fsf@oldenburg.str.redhat.com>
In-Reply-To: <87k080i4fo.fsf@oldenburg.str.redhat.com>

Hi Florian,

On Tue, Jul 26, 2022 at 11:55:23AM +0200, Florian Weimer wrote:
> * Jason A. Donenfeld:
>
> > +  pfd.fd = TEMP_FAILURE_RETRY (
> > +      __open64_nocancel ("/dev/random", O_RDONLY | O_CLOEXEC | O_NOCTTY));
> > +  if (pfd.fd < 0)
> > +    arc4random_getrandom_failure ();
> > +  if (__poll (&pfd, 1, -1) < 0)
> > +    arc4random_getrandom_failure ();
> > +  if (__close_nocancel (pfd.fd) < 0)
> > +    arc4random_getrandom_failure ();
>
> What happens if /dev/random is actually /dev/urandom? Will the poll
> call fail?

Yes. I'm unsure whether you're asking this because it'd be a nice
simplification to only have to open one fd, or because you're worried
about confusion. I don't think the confusion problem is one we should
take too seriously, but if you're concerned, we can always fstat and
check the maj/min. Seems a bit much, though.

> I think we need a no-cancel variant of poll here, and we also need to
> handle EINTR gracefully.

Thanks for the note about poll nocancel. I'll try to add this. I don't
totally know how to manage that plumbing, but I'll give it my best
shot.

> Performance-wise, my 1000-element shuffle benchmark runs about 14 times
> slower without userspace buffering. (For comparison, just removing
> ChaCha20 while keeping a 256-byte buffer makes it run roughly 25% slower
> than current master.) Our random() implementation is quite slow, so
> arc4random() as a replacement call is competitive. The unbuffered
> version, not so much.

Yes, as mentioned, this is slower. But let's first land something that
is *correct*, and then start optimizing. Let's not prematurely
optimize and create a problematic function that nobody should use.

> Running the benchmark, I see 40% of the time spent in chacha_permute in
> the kernel, which is really quite odd. Why doesn't the system call
> overhead dominate?

Huh, that is interesting. I guess if you're reading 4 bytes for an
integer, the kernel winds up computing a whole ChaCha block each time,
with half of it used for fast key erasure and half of it returnable to
the caller.
When we later figure out a safer way to buffer, this overhead should go
away. But for now, we really should not prematurely optimize.

I'll have v3 out shortly with your suggested fixes.

Jason