From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 11523 invoked by alias); 14 Nov 2013 17:08:53 -0000
Mailing-List: contact glibc-bugs-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Id:
List-Subscribe:
List-Post:
List-Help: ,
Sender: glibc-bugs-owner@sourceware.org
Received: (qmail 11478 invoked by uid 55); 14 Nov 2013 17:08:48 -0000
From: "bugdal at aerifal dot cx"
To: glibc-bugs@sourceware.org
Subject: [Bug malloc/16159] malloc_printerr() deadlock, when calling malloc_printerr() again
Date: Thu, 14 Nov 2013 17:08:00 -0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: glibc
X-Bugzilla-Component: malloc
X-Bugzilla-Version: 2.12
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: bugdal at aerifal dot cx
X-Bugzilla-Status: SUSPENDED
X-Bugzilla-Priority: P2
X-Bugzilla-Assigned-To: unassigned at sourceware dot org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields:
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: http://sourceware.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
X-SW-Source: 2013-11/txt/msg00150.txt.bz2

http://sourceware.org/bugzilla/show_bug.cgi?id=16159

--- Comment #20 from Rich Felker ---
On Thu, Nov 14, 2013 at 04:47:48PM +0000, neleai at seznam dot cz wrote:
> These also count as I wanted to show a relative performance impact. If

I agree this approach makes sense, but the relative performance impact
could change when the program (possibly linked with libgcc_s) is
invoked via posix_spawn or vfork+exec from a high-load server, versus
as part of an inefficient shell script where the shell may have a lot
of additional syscall overhead on each command (this might also vary
between shells; dash or busybox ash might perform very differently
from bash).
So while we may not care about the most extreme impact, I think it's
important to consider how large the relative overhead is under
low-overhead, real-world invocation conditions.

> this is taken into extreme we could improve performance by staticaly linking lm
> and lpthread

Yes, of course -- actually, I would recommend merging all of the glibc
.so's into libc.so, but I understand that the current situation with
symbol versions greatly complicates this, and that there might be
other issues. It would certainly improve load-time performance and
memory overhead for small programs, though. But I think this is
outside the scope of this bug report. The interest in looking at
performance here is asking whether a proposed change would make
performance noticeably worse (a regression), not how we can best
optimize startup performance.

-- 
You are receiving this mail because:
You are on the CC list for the bug.