Subject: Re: [PATCH 2/2] libdw: Rewrite the memory handler to be more robust.
From: Mark Wielaard
To: Jonathon Anderson
Cc: elfutils-devel@sourceware.org
Date: Fri, 08 Nov 2019 16:22:00 -0000

Hi,

On Thu, 2019-11-07 at 12:13 -0600, Jonathon Anderson wrote:
> On Thu, Nov 7, 2019 at 18:20, Mark Wielaard wrote:
> > Do we really need this?
> > We already use __thread unconditionally in the rest of the code.
> > The usage of threads.h seems to imply we actually want C11
> > _Thread_local. Is that what you really want, or can we just use
> > __thread in libdw_alloc.c for thread_id?
>
> We don't really need it, I just got in the habit of writing
> thread_local (and proper C11 compat). __thread is perfectly fine
> for thread_id.

Great, removed.

> > I think if you include helgrind.h you won't get the drd.h
> > ANNOTATE_HAPPENS_BEFORE/AFTER. So do you also need to include
> > drd.h?
>
> Not really, just another habit. Since this file only needs
> HAPPENS_* helgrind.h is sufficient.

Thanks. drd.h include removed.

> > > +#else
> > > +#define ANNOTATE_HAPPENS_BEFORE(X)
> > > +#define ANNOTATE_HAPPENS_AFTER(X)
> > > +#endif
> >
> > Could you explain the usage of the happens_before/after annotations in
> > this code. I must admit that I don't fully understand why/how it works
> > in this case. Specifically since realloc might change the address that
> > mem_tails points to.
>
> Reader-writer locks ensure no "readers" are present whenever a "writer"
> is around.
> In this case we use the "write" side for resizing mem_tails
> and the "read" side when mem_tails needs to stay stable. Which is why
> most of the time we have a read lock and then promote to a write lock
> when we need to reallocate.
>
> The annotations are to clean up a minor deficiency in Helgrind: for
> whatever reason, if you do writes under a read lock it reports races
> with the writes from under the write lock (in this case,
> __libdw_allocate and the realloc). I haven't dug deep enough to know
> exactly why it happens, just that it does and adding this H-B arc seems
> to fix the issue.

OK, let's keep them in for now. They are disabled by default anyway.
For now people who want a "helgrindable" libdw will need to rebuild
libdw with them enabled.

> > > +#define THREAD_ID_UNSET ((size_t) -1)
> > > +static thread_local size_t thread_id = THREAD_ID_UNSET;
> > > +static atomic_size_t next_id = ATOMIC_VAR_INIT(0);
> >
> > OK, but maybe use static __thread size_t thread_id as explained
> > above?
>
> Fine by me.

Done.

> > O, and I now think you would then also need something for dwarf_begin
> > to reset any set thread_ids... bleah. So probably way too complicated.
> > So let's not, unless you think this is actually simple.
>
> Which is why I didn't want to do that.
>
> The other option was to have a sort of free list for ids, but in that
> case the cleanup isn't great (sometime after all threads have
> completed... if you consider detached threads, things get hairy). Plus
> it requires a fully concurrent stack or queue, which is a complicated
> data structure itself.

Yeah, agreed, let's keep it with a simple monotonically increasing
next_id. Things need to get really big before this ever becomes a
problem. And I don't think programs will keep spawning new threads and
using Dwarfs on each of them anyway. I expect longer running processes
that do need to handle Dwarfs in a concurrent fashion to use thread
pools.
Pushed with the small changes noted above.

Thanks,

Mark