From: "ross at burtonini dot com"
To: elfutils-devel@sourceware.org
Subject: [Bug debuginfod/29976] New: webapi connection pool eats all file handles
Date: Mon, 09 Jan 2023 18:14:52 +0000

https://sourceware.org/bugzilla/show_bug.cgi?id=29976

            Bug ID: 29976
           Summary: webapi connection pool eats all file handles
           Product: elfutils
           Version: unspecified
            Status: UNCONFIRMED
          Severity: normal
          Priority: P2
         Component: debuginfod
          Assignee: unassigned at sourceware dot org
          Reporter: ross at burtonini dot com
                CC: elfutils-devel at sourceware dot org
  Target Milestone: ---

If I start debuginfod without any concurrency limits:

[Mon Jan 9 17:40:14 2023] (2356243/2356243): libmicrohttpd error: Failed to
create worker inter-thread communication channel: Too many open files

My machine has 256 cores, and stracing debuginfod shows that it fails to open
more files after creating 510 epoll fds (each paired with an eventfd, so two
fds per worker):

epoll_create1(EPOLL_CLOEXEC) = 1021
epoll_ctl(1021, EPOLL_CTL_ADD, 3, {events=EPOLLIN, data={u32=4027013664, u64=187651148175904}}) = 0
epoll_ctl(1021, EPOLL_CTL_ADD, 1020, {events=EPOLLIN, data={u32=2965961632, u64=281473647704992}}) = 0
mmap(NULL, 8454144, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0xfff6b97b0000
mprotect(0xfff6b97c0000, 8388608, PROT_READ|PROT_WRITE) = 0
rt_sigprocmask(SIG_BLOCK, ~[], [], 8) = 0
clone(child_stack=0xfff6b9fbea00, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tid=[2361982], tls=0xfff6b9fbf880, child_tidptr=0xfff6b9fbf210) = 2361982
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
eventfd2(0, EFD_CLOEXEC|EFD_NONBLOCK) = 1022
epoll_create1(EPOLL_CLOEXEC) = 1023
epoll_ctl(1023, EPOLL_CTL_ADD, 3, {events=EPOLLIN, data={u32=4027014456, u64=187651148176696}}) = 0
epoll_ctl(1023, EPOLL_CTL_ADD, 1022, {events=EPOLLIN, data={u32=2965961632, u64=281473647704992}}) = 0
mmap(NULL, 8454144, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0xfff6b8fa0000
mprotect(0xfff6b8fb0000, 8388608, PROT_READ|PROT_WRITE) = 0
rt_sigprocmask(SIG_BLOCK, ~[], [], 8) = 0
clone(child_stack=0xfff6b97aea00, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tid=[2361983], tls=0xfff6b97af880, child_tidptr=0xfff6b97af210) = 2361983
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
eventfd2(0, EFD_CLOEXEC|EFD_NONBLOCK) = -1 EMFILE (Too many open files)
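For illustration, here is a minimal sketch (a standalone test program of my
own, not debuginfod or libmicrohttpd code) that mimics the per-worker fd
pattern visible above (one eventfd for the inter-thread communication channel
plus one epoll fd per worker) and counts how many such pairs fit under the
current RLIMIT_NOFILE:

// Minimal sketch, not debuginfod code: emulate the per-worker fd usage
// seen in the strace above, one eventfd plus one epoll fd per worker,
// and count how many "workers" fit before hitting EMFILE.
#include <cerrno>
#include <cstdio>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <sys/resource.h>

int main() {
  rlimit rl{};
  getrlimit(RLIMIT_NOFILE, &rl);
  std::printf("RLIMIT_NOFILE (soft): %llu\n",
              (unsigned long long) rl.rlim_cur);

  int workers = 0;
  for (;;) {
    int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);  // wakeup channel
    if (efd < 0)
      break;
    int epfd = epoll_create1(EPOLL_CLOEXEC);           // worker poll set
    if (epfd < 0)
      break;
    epoll_event ev{};
    ev.events = EPOLLIN;
    epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev);  // worker watches its eventfd
    ++workers;  // fds deliberately kept open, like live worker threads
  }
  std::printf("workers created before failure: %d (errno %d)\n",
              workers, errno);
  return 0;
}

With ulimit -n 1024 (and stdin/stdout/stderr already open) this stops at
roughly 510 pairs, which matches the 510 epoll fds seen in the strace.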
ulimit -n is 1024; do I really need more than that just to start debuginfod on
a 256-core machine? Since the connection pool defaults to twice the thread
count and each worker appears to use two fds, maybe I do. Should the
connection pool have a hard cap when using the default? I doubt 512 incoming
connections is a usual load, and anyone who really needs that many can specify
-C explicitly.
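For what it's worth, one possible shape for such a default cap, as a sketch
only: it assumes the twice-the-core-count default observed above, and
default_connection_pool() and kHardCap are invented names for illustration,
not actual debuginfod code.

// Hypothetical sketch only; not debuginfod or libmicrohttpd API.
#include <algorithm>
#include <sys/resource.h>
#include <thread>

static unsigned default_connection_pool() {
  // Current behaviour as observed: twice the core count.
  unsigned pool = 2 * std::thread::hardware_concurrency();
  const unsigned kHardCap = 64;  // arbitrary ceiling, applied to the default only

  rlimit rl{};
  if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY) {
    // Each worker costs ~2 fds (eventfd + epoll); reserve half the fd
    // limit for databases, archives and client sockets, so allow at
    // most rlim_cur / 4 workers.
    pool = std::min(pool, (unsigned) (rl.rlim_cur / 4));
  }
  return std::max(2u, std::min(pool, kHardCap));
}

Clamping the default against RLIMIT_NOFILE would keep debuginfod startable out
of the box, while -C would still let users opt into larger pools deliberately.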