Date: Thu, 22 May 2014 14:40:00 -0000
Message-Id: <201405221440.s4MEeEbx021165@glazunov.sibelius.xs4all.nl>
From: Mark Kettenis
To: gbenson@redhat.com
CC: tromey@redhat.com, palves@redhat.com, fw@deneb.enyo.de,
    mark.kettenis@xs4all.nl, gdb-patches@sourceware.org
In-reply-to: <20140522140904.GD15598@blade.nx> (message from Gary Benson
    on Thu, 22 May 2014 15:09:04 +0100)
Subject: Re: [PATCH 0/2] Demangler crash handler
References: <20140509100656.GA4760@blade.nx>
    <201405091120.s49BKO1f010622@glazunov.sibelius.xs4all.nl>
    <87fvkhjqvs.fsf@mid.deneb.enyo.de> <53737737.2030901@redhat.com>
    <87ppj8s7my.fsf@fleche.redhat.com> <20140522140904.GD15598@blade.nx>

> Date: Thu, 22 May 2014 15:09:04 +0100
> From: Gary Benson
>
> Tom Tromey wrote:
> > Pedro> Then stealing a signal handler always has multi-threading
> > Pedro> considerations. E.g., gdb Python code could well spawn a
> > Pedro> thread that happens to call something that wants its own
> > Pedro> SIGSEGV handler... Signal handlers are per-process, not
> > Pedro> per-thread.
> >
> > That is true in theory but I think it is unlikely in practice. And,
> > should it happen -- well, the onus is on folks writing extensions
> > not to mess things up. That's the nature of the beast. And, sure,
> > it is messy, particularly if we ever upstream "import gdb", but even
> > so, signals are just fraught and this is not an ordinary enough
> > usage to justify preventing gdb from doing it.
>
> GDB installs handlers for INT, TERM, QUIT, HUP, FPE, WINCH, CONT,
> TTOU, TRAP, ALRM and TSTP, and some other platform-specific ones
> I didn't recognise. Is there anything that means SIGSEGV should
> be treated differently to all these other signals?

From that list SIGFPE is probably a bogosity. I don't think the SIGFPE
handler will do the right thing on many OSes and architectures
supported by GDB, since it is unspecified whether the trapping
instruction will be re-executed upon return from the signal handler.
I'd argue that the SIGFPE handler is just as unhelpful as the SIGSEGV
handler you're proposing. Luckily, we don't seem to have a lot of
division-by-zero bugs in the code base.
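To make the re-execution concern concrete, here is a minimal,
hypothetical sketch (not GDB code, names made up): since returning
normally from a handler for a hardware-generated SIGFPE leaves it
unspecified whether the trapping instruction runs again, the only
portable way to recover is to never return from the handler, e.g. by
jumping back to a sigsetjmp placed around the risky computation.

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf fpe_env;

/* Jump out of the handler instead of returning, so we never come
   back to the trapping instruction.  */
static void
fpe_handler (int sig)
{
  siglongjmp (fpe_env, 1);
}

int
main (void)
{
  volatile int zero = 0;	/* volatile so the division isn't folded away */

  signal (SIGFPE, fpe_handler);

  if (sigsetjmp (fpe_env, 1) == 0)
    printf ("1 / 0 = %d\n", 1 / zero);	/* may raise SIGFPE */
  else
    printf ("caught SIGFPE, skipped the division\n");

  return 0;
}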
> > The choice is really between SEGV catching and "somebody else
> > down the road fixes more demangler bugs".
>
> The demangler bugs will get fixed one way or another. The choice is:
> do we allow users to continue to use GDB while the bug they've hit is
> fixed, or, do we make them wait? In the expectation that they will
> put their own work aside while they fix GDB instead?

Unless there is a way to force a core dump (like internal_error()
offers) with the state at the point of the SIGSEGV in it, yes, we need
to make them wait or fix it themselves.

I'd really like to avoid adding a SIGSEGV handler altogether. But I'm
willing to compromise if the signal handler offers the opportunity to
create a core dump. Now doing so in a signal-safe way will be a bit
tricky of course.
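For illustration, a rough, untested sketch of what such a
core-dumping handler could look like (hypothetical names, not taken
from the proposed patch): it limits itself to async-signal-safe calls,
writes a short note with write(), then restores the default
disposition and re-raises the signal so the kernel still terminates
the process with a core dump.

#include <signal.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical handler, not from the actual patch.  Everything called
   here is async-signal-safe per POSIX.  */
static void
fatal_signal_handler (int sig)
{
  static const char msg[] = "gdb: fatal signal, dumping core\n";

  /* fprintf() is not async-signal-safe; write() is.  */
  write (STDERR_FILENO, msg, sizeof (msg) - 1);

  /* Restore the default disposition and re-raise, so that when the
     handler returns the process is terminated with a core dump.  */
  signal (sig, SIG_DFL);
  raise (sig);
}

static void
install_fatal_signal_handler (void)
{
  struct sigaction sa;

  memset (&sa, 0, sizeof (sa));
  sa.sa_handler = fatal_signal_handler;
  sigemptyset (&sa.sa_mask);
  sigaction (SIGSEGV, &sa, NULL);
}

The core then reflects the state at the re-raise rather than the exact
faulting instruction; alternatively the handler could just restore
SIG_DFL and return, letting the fault trigger again and dump core at
the original spot.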