Subject: Re: [PING][PATCH] Fix race in pthread_mutex_lock while promoting to PTHREAD_MUTEX_ELISION_NP [BZ #23275]
To: libc-alpha@sourceware.org
From: Stefan Liebler
Date: Mon, 10 Sep 2018 12:01:00 -0000
In-Reply-To: <9244dcb4-bb9b-6faa-f2d9-e43b48d37ea0@linux.ibm.com>
Message-Id: <841fbc70-a745-d8d4-4617-044051cbff61@linux.ibm.com>

PING

On 09/03/2018 09:09 AM, Stefan Liebler wrote:
> PING
>
> On 08/27/2018 11:12 AM, Stefan Liebler wrote:
>> PING
>> Can anybody have a look, please?
>>
>> On 07/30/2018 09:21 AM, Stefan Liebler wrote:
>>> PING
>>> On 07/23/2018 08:42 AM, Stefan Liebler wrote:
>>>> PING
>>>> Please have a look at the patch posted here:
>>>> https://www.sourceware.org/ml/libc-alpha/2018-06/msg00246.html
>>>>
>>>> On 07/16/2018 01:56 PM, Stefan Liebler wrote:
>>>>> PING
>>>>>
>>>>> On 07/10/2018 08:33 AM, Stefan Liebler wrote:
>>>>>> PING
>>>>>>
>>>>>> On 07/03/2018 08:28 AM, Stefan Liebler wrote:
>>>>>>> PING
>>>>>>>
>>>>>>> On 06/26/2018 08:45 AM, Stefan Liebler wrote:
>>>>>>>> PING
>>>>>>>>
>>>>>>>> On 06/19/2018 09:45 AM, Stefan Liebler wrote:
>>>>>>>>> PING
>>>>>>>>>
>>>>>>>>> On 06/12/2018 04:24 PM, Stefan Liebler wrote:
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> The race leads either to pthread_mutex_destroy returning EBUSY
>>>>>>>>>> or to triggering an assertion (see the description in bugzilla).
>>>>>>>>>>
>>>>>>>>>> This patch fixes the race by ensuring that the elision path is
>>>>>>>>>> used in all cases if elision is enabled via the GLIBC_TUNABLES
>>>>>>>>>> framework.
>>>>>>>>>>
>>>>>>>>>> The __kind variable in struct __pthread_mutex_s is accessed
>>>>>>>>>> concurrently.  Therefore we are now using the atomic macros.
>>>>>>>>>>
>>>>>>>>>> The new testcase tst-mutex10 triggers the race on s390x and
>>>>>>>>>> Intel.  Presumably it does on POWER as well, but I don't have
>>>>>>>>>> access to a POWER machine with lock elision.  At least the code
>>>>>>>>>> for POWER is the same as on the other two architectures.  Can
>>>>>>>>>> somebody test it on POWER?
>>>>>>>>>>
>>>>>>>>>> Bye
>>>>>>>>>> Stefan
>>>>>>>>>>
>>>>>>>>>> ChangeLog:
>>>>>>>>>>
>>>>>>>>>>      [BZ #23275]
>>>>>>>>>>      * nptl/tst-mutex10.c: New file.
>>>>>>>>>>      * nptl/Makefile (tests): Add tst-mutex10.
>>>>>>>>>>      (tst-mutex-ENV): New variable.
>>>>>>>>>>      * sysdeps/unix/sysv/linux/s390/force-elision.h
>>>>>>>>>>      (FORCE_ELISION): Ensure that elision path is used if
>>>>>>>>>>      elision is available.
>>>>>>>>>>      * sysdeps/unix/sysv/linux/powerpc/force-elision.h
>>>>>>>>>>      (FORCE_ELISION): Likewise.
>>>>>>>>>>      * sysdeps/unix/sysv/linux/x86/force-elision.h
>>>>>>>>>>      (FORCE_ELISION): Likewise.
>>>>>>>>>>      * nptl/pthreadP.h (PTHREAD_MUTEX_TYPE,
>>>>>>>>>>      PTHREAD_MUTEX_TYPE_ELISION, PTHREAD_MUTEX_PSHARED):
>>>>>>>>>>      Use atomic_load_relaxed.
>>>>>>>>>>      * nptl/pthread_mutex_consistent.c (pthread_mutex_consistent):
>>>>>>>>>>      Likewise.
>>>>>>>>>>      * nptl/pthread_mutex_getprioceiling.c
>>>>>>>>>>      (pthread_mutex_getprioceiling): Likewise.
>>>>>>>>>>      * nptl/pthread_mutex_lock.c (__pthread_mutex_lock_full,
>>>>>>>>>>      __pthread_mutex_cond_lock_adjust): Likewise.
>>>>>>>>>>      * nptl/pthread_mutex_setprioceiling.c
>>>>>>>>>>      (pthread_mutex_setprioceiling): Likewise.
>>>>>>>>>>      * nptl/pthread_mutex_timedlock.c (__pthread_mutex_timedlock):
>>>>>>>>>>      Likewise.
>>>>>>>>>>      * nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock):
>>>>>>>>>>      Likewise.
>>>>>>>>>>      * nptl/pthread_mutex_unlock.c (__pthread_mutex_unlock_full):
>>>>>>>>>>      Likewise.
>>>>>>>>>>      * sysdeps/nptl/bits/thread-shared-types.h
>>>>>>>>>>      (struct __pthread_mutex_s): Add comments.
>>>>>>>>>>      * nptl/pthread_mutex_destroy.c (__pthread_mutex_destroy):
>>>>>>>>>>      Use atomic_load_relaxed and atomic_store_relaxed.
>>>>>>>>>>      * nptl/pthread_mutex_init.c (__pthread_mutex_init):
>>>>>>>>>>      Use atomic_store_relaxed.