Message-ID: <1515086144.9480.286.camel@redhat.com>
Subject: Re: [PATCH] Implement libc_once_retry for atomic initialization
 with allocation
From: Torvald Riegel
To: Florian Weimer
Cc: GNU C Library, "Dmitry V. Levin"
Date: Thu, 04 Jan 2018 17:15:00 -0000
In-Reply-To: <6e4b6b22-3f38-f798-f2e0-eb22311ed6b5@redhat.com>
References: <6e4b6b22-3f38-f798-f2e0-eb22311ed6b5@redhat.com>

On Thu, 2018-01-04 at 16:09 +0100, Florian Weimer wrote:
> Subject: [PATCH] Implement libc_once_retry for atomic initialization
> with allocation

Can you find a different name for it?  It has similarities to a generic
libc_once, but the pattern is more special.  Do you need it often enough
to make it worth breaking it out?

> diff --git a/include/atomic.h b/include/atomic.h
> index 6af07dba58..91e176b040 100644
> --- a/include/atomic.h
> +++ b/include/atomic.h

I don't think this should go into atomic.h because this is where we
really handle the basics of synchronization.  This is a synchronization
helper, and doesn't need to be touched by someone porting glibc to
another arch.

> @@ -826,4 +826,58 @@ void __atomic_link_error (void);
> # error ATOMIC_EXCHANGE_USES_CAS has to be defined.
> #endif
>
> +/* Slow path for libc_once_retry; see below.  */
> +void *__libc_once_retry_slow (void **__place,
> +                              void *(*__allocate) (void *__closure),
> +                              void (*__deallocate) (void *__closure,
> +                                                    void *__ptr),
> +                              void *__closure);
> +
> +/* Perform an acquire MO load on *PLACE.  If *PLACE is not NULL,
> +   return *PLACE.  Otherwise, call ALLOCATE (CLOSURE).  If that
> +   returns NULL, return NULL.  Otherwise, atomically replace *PLACE
> +   with PTR, the result of the ALLOCATE call (with acquire-release

That doesn't quite match what you implement.  First, you don't replace
it unconditionally, but use a CAS (you can't just throw in an
"atomically" here because the allocations are mixed in and they aren't).
Second, you use the weak CAS but the loop includes alloc/dealloc; see
below.

> +   MO).  If *PLACE was updated concurrently, call DEALLOCATE (CLOSURE,
> +   PTR) to undo the effect of allocate, and return the new value of
> +   *PLACE.  If DEALLOCATE is NULL, call the free (PTR) instead.

s/the //

> +
> +   It is expected that callers define an inline helper function
> +   function which adds type safety, like this.

s/function function which/function that/
s/./:/

> +
> +   struct foo { ... };
> +   struct foo *global_foo;
> +   static void *allocate_foo (void *closure);
> +   static void *deallocate_foo (void *closure, void *ptr);
> +
> +   static inline struct foo *
> +   get_foo (void)
> +   {
> +     return __libc_once_retry (&global_foo, allocate_foo, free_foo, NULL);
> +   }
> +
> +   Usage of this function looks like this:
> +
> +   struct foo *local_foo = get_foo ();
> +   if (local_foo == NULL)
> +     report_allocation_failure ();
> +
> +   Compare to __libc_once, __libc_once_retry has the advantage that it

Compare_d_

> +   does not need separate space for a control variable, and that it is
> +   safe with regards to cancellation and other forms of exception
> +   handling if the provided callback functions are safe.
> +   */
> +static inline void *
> +libc_once_retry (void **__place, void *(*__allocate) (void *__closure),
> +                 void (*__deallocate) (void *__closure, void *__ptr),
> +                 void *__closure)
> +{
> +  /* Synchronizes with the release-store CAS in

s/release-store/release MO/

> diff --git a/misc/libc_once_retry.c b/misc/libc_once_retry.c
> new file mode 100644
> index 0000000000..ecd352e2a3
> --- /dev/null
> +++ b/misc/libc_once_retry.c
> @@ -0,0 +1,55 @@
> +/* Concurrent initialization of a pointer.
> +   Copyright (C) 2018 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include
> +#include
> +
> +void *
> +__libc_once_retry_slow (void **place, void *(*allocate) (void *closure),
> +                        void (*deallocate) (void *closure, void *ptr),
> +                        void *closure)
> +{
> +  void *result;
> +
> +  do
> +    {
> +      result = allocate (closure);
> +      if (result == NULL)
> +        return NULL;
> +
> +      /* Synchronizes with the acquire MO load in
> +         __libc_once_retry.  */
> +      void *expected = NULL;
> +      if (atomic_compare_exchange_weak_release (place, &expected, result))
> +        return result;

It seems you're looking for a strong CAS here, so why don't you make a
loop around the weak CAS?  Using an acq_rel CAS, this is simpler.
You'd avoid having more than one allocate/deallocate pair, which might
be unexpected given that your documentation of the function's semantics
doesn't make that clear (and it might perhaps matter for custom
alloc/dealloc functions).