From: Mark Wielaard <mark@klomp.org>
To: fche at redhat dot com
Cc: elfutils-devel@sourceware.org
Subject: Re: [Bug debuginfod/27859] New: reused debuginfod_client objects don't clean out curl handles enough
Date: Fri, 14 May 2021 15:15:12 +0200
Message-ID: <20210514131512.GA2697@wildebeest.org>

On Thu, May 13, 2021 at 01:26:42AM +0000, fche at redhat dot com via Elfutils-devel wrote:
> https://sourceware.org/bugzilla/show_bug.cgi?id=27859
>
> In a sequence of queries on the same debuginfod_client, as long as
> they are all successful, things are fine. Once there is a 404 error
> however, this appears to latch, and subsequent requests give 404
> whether or not they were resolvable by upstream.

Makes sense that curl remembers 404 results. Does that mean we need to
refresh the curl handle when a request is made for a negative cached
entry and cache_miss_s expires?
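
For concreteness, a minimal sketch of what "refreshing the curl handle"
on negative-cache expiry could look like. This is not the actual
elfutils code: the client struct, the handle and miss_time fields, and
the CACHE_MISS_S constant are all hypothetical stand-ins for
debuginfod_client's internal state and the cache_miss_s setting.

  /* Hypothetical sketch, not the elfutils implementation.  Assumes the
     client caches a CURL easy handle across queries and records when a
     404 result was last negatively cached.  */
  #include <curl/curl.h>
  #include <time.h>

  #define CACHE_MISS_S 600        /* illustrative negative-cache timeout */

  struct client
  {
    CURL *handle;                 /* easy handle reused across queries */
    time_t miss_time;             /* when a 404 was cached, 0 if none  */
  };

  /* Return a handle safe to reuse.  Once the negative cache entry has
     expired, reset the handle so any state left over from the earlier
     404 cannot latch onto the next request.  */
  static CURL *
  refresh_handle (struct client *c)
  {
    time_t now = time (NULL);
    if (c->miss_time != 0 && now - c->miss_time >= CACHE_MISS_S)
      {
        /* Reset all options/state in place...  */
        curl_easy_reset (c->handle);
        /* ...or recreate the handle entirely, which also drops any
           cached connections:
             curl_easy_cleanup (c->handle);
             c->handle = curl_easy_init ();  */
        c->miss_time = 0;
      }
    return c->handle;
  }

Whether curl_easy_reset is enough, or the handle has to be torn down
and recreated, would depend on what state is actually latching; the
sketch only shows where such a refresh would hook in.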