Date: Fri, 21 May 2004 15:22:38 +0200
From: Jakub Jelinek
To: Ulrich Drepper
Cc: Glibc hackers
Subject: [PATCH] Fix pthread_cond_wait bug on !i386 !x86_64
Message-ID: <20040521132238.GM5191@sunsite.ms.mff.cuni.cz>

Hi!

I hope this is the only cause of the random failures of tst-cond{16,17,18}
on architectures that use the generic pthread_cond_*.c implementation.
On a 7-way PPC box I have now run tst-cond18 71 times without a failure
(previously it took at most 20 runs to fail), and 16 tst-cond16 runs have
passed so far as well.

2004-05-21  Jakub Jelinek

	* sysdeps/pthread/pthread_cond_wait.c (__pthread_cond_wait):
	Compare __broadcast_seq with bc_seq after acquiring internal
	lock instead of before it.

--- libc/nptl/sysdeps/pthread/pthread_cond_wait.c.jj	2004-05-19 22:58:46.000000000 +0200
+++ libc/nptl/sysdeps/pthread/pthread_cond_wait.c	2004-05-21 17:30:54.654099109 +0200
@@ -143,13 +143,13 @@ __pthread_cond_wait (cond, mutex)
       /* Disable asynchronous cancellation.  */
       __pthread_disable_asynccancel (cbuffer.oldtype);
 
+      /* We are going to look at shared data again, so get the lock.  */
+      lll_mutex_lock (cond->__data.__lock);
+
       /* If a broadcast happened, we are done.  */
       if (cbuffer.bc_seq != cond->__data.__broadcast_seq)
 	goto bc_out;
 
-      /* We are going to look at shared data again, so get the lock.  */
-      lll_mutex_lock (cond->__data.__lock);
-
       /* Check whether we are eligible for wakeup.  */
       val = cond->__data.__wakeup_seq;
     }

	Jakub
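
For readers without the surrounding source handy, here is a minimal
standalone sketch (not glibc code) of the ordering the patch enforces:
the waiter must re-acquire the condvar's internal lock before comparing
its saved bc_seq snapshot against __broadcast_seq, because the
broadcaster bumps that counter under the same lock, so checking it
before taking the lock reads shared state unlocked and can race.  In
the sketch, pthread_mutex_t stands in for lll_mutex_lock, and the names
fake_cond, woken_by_broadcast and broadcaster are made up for
illustration.

/* Minimal sketch of the lock-then-compare ordering; not glibc code.  */
#include <pthread.h>
#include <stdio.h>

struct fake_cond
{
  pthread_mutex_t internal_lock;   /* stands in for cond->__data.__lock */
  unsigned int broadcast_seq;      /* stands in for __broadcast_seq */
};

static struct fake_cond fc = { PTHREAD_MUTEX_INITIALIZER, 0 };

/* Waiter side: bc_seq is the value of broadcast_seq sampled when the
   wait started (the cbuffer.bc_seq snapshot in the real code).  */
static int
woken_by_broadcast (unsigned int bc_seq)
{
  int done;

  /* Take the internal lock *before* looking at shared data again;
     reading broadcast_seq without it is the race the patch removes.  */
  pthread_mutex_lock (&fc.internal_lock);
  done = (bc_seq != fc.broadcast_seq);
  pthread_mutex_unlock (&fc.internal_lock);
  return done;
}

/* Broadcaster side: bump the sequence under the same lock.  */
static void *
broadcaster (void *arg)
{
  pthread_mutex_lock (&fc.internal_lock);
  ++fc.broadcast_seq;
  pthread_mutex_unlock (&fc.internal_lock);
  return arg;
}

int
main (void)
{
  /* Snapshot taken at "wait" time, before the broadcaster runs.  */
  unsigned int bc_seq = fc.broadcast_seq;
  pthread_t th;

  pthread_create (&th, NULL, broadcaster, NULL);
  pthread_join (th, NULL);

  printf ("broadcast seen: %s\n", woken_by_broadcast (bc_seq) ? "yes" : "no");
  return 0;
}

Compile with gcc -pthread; the only point of the example is the
lock-then-compare ordering, not the rest of the condvar machinery.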