From: Vladimir Kliatchko
Date: Sat, 28 May 2005 01:30:00 -0000
Subject: RE: New pthread_once implementation
In-reply-to: <97ffb3105052709425ce1126a@mail.gmail.com>
To: 'Gottlob Frege'
Cc: 'Ross Johnson', pthreads-win32@sources.redhat.com

Nice catch. Let me see if I can fix it. Note that the same problem exists
in the currently released event-based implementation (cvs version 1.16):

thread1 comes in, starts initing
thread2 creates event, starts waiting
thread3 comes in, starts waiting
thread1 is cancelled, signals event
thread2 wakes up, proceeds to the point right before the ResetEvent
thread3 wakes up, closes event handle
thread2 resets closed handle

Re: your previous message:
> "If only one thread ever comes in, and is canceled in the init_routine,
> then the semaphore is never cleaned up."

If only one thread ever comes in, and is canceled in the init_routine,
then the semaphore is never created to begin with, right?

Also, regarding my previous comment to Ross about the very high cost of
using InterlockedExchangeAdd for MBR: I did some simple benchmarking.
Running pthread_once 50,000,000 times on my pretty slow single-CPU machine
takes about 2.1 seconds. Replacing the InterlockedExchangeAdd with a
simple read brings it down to 0.6 seconds. This looks significant.
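To make that window concrete, here is a condensed sketch of the waiter
path, with the interleaving above marked in comments. This is only the
shape of the hazard, not the actual rev 1.16 source; the type and names
(once_t, waiters, once_wait_slowpath) are made up for illustration:

#include <windows.h>

typedef struct {
    volatile LONG done;     /* becomes 1 once init_routine has finished */
    volatile LONG waiters;  /* how many threads still use 'event'       */
    HANDLE        event;    /* manual-reset event, created lazily       */
} once_t;

/* Slow path taken by threads that lose the race to run init_routine. */
static void once_wait_slowpath(once_t *oc)
{
    InterlockedIncrement(&oc->waiters);
    if (oc->event == NULL)                      /* thread2 creates event */
        oc->event = CreateEvent(NULL, TRUE, FALSE, NULL);

    WaitForSingleObject(oc->event, INFINITE);   /* thread2/thread3 wait  */

    /* thread1 was cancelled inside init_routine and signalled the
     * event, so both waiters wake up to retry the initialisation.      */
    if (InterlockedDecrement(&oc->waiters) > 0) {
        /* thread2 takes this branch and is preempted RIGHT HERE ...    */
        ResetEvent(oc->event);  /* ... so it later resets a closed handle */
    } else {
        /* ... because thread3, last one out, tears the event down.     */
        CloseHandle(oc->event);
        oc->event = NULL;
    }
}

The same last-waiter-closes pattern with an uncoordinated re-arm step
appears to be what bites in both the event and the semaphore versions.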
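For reference, the benchmark was shaped roughly like the following. This
is a reconstruction rather than the exact harness I ran; 'done' stands in
for the once-control's initialized flag, and the timings will of course
vary by machine:

#include <stdio.h>
#include <windows.h>

#define ITERATIONS 50000000

static volatile LONG done = 1;   /* init finished: fast path only */

int main(void)
{
    DWORD t0;
    long  i, hits = 0;

    t0 = GetTickCount();
    for (i = 0; i < ITERATIONS; i++)
        if (InterlockedExchangeAdd((LONG *)&done, 0) != 0) /* read + barrier */
            hits++;
    printf("interlocked read: %lu ms\n", (unsigned long)(GetTickCount() - t0));

    t0 = GetTickCount();
    for (i = 0; i < ITERATIONS; i++)
        if (done != 0)                                     /* plain read */
            hits++;
    printf("plain read:       %lu ms (hits=%ld)\n",
           (unsigned long)(GetTickCount() - t0), hits);
    return 0;
}

Note that on x86 an aligned 32-bit read is atomic either way; what the
interlocked form adds is the full memory barrier, so the question is
really whether the fast path can live with the weaker ordering.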
-----Original Message-----
From: Gottlob Frege [mailto:gottlobfrege@gmail.com]
Sent: Friday, May 27, 2005 12:42 PM
To: Ross Johnson
Cc: Vladimir Kliatchko
Subject: Re: New pthread_once implementation

thread1 comes in, starts initing
thread2 creates sema and waits
thread1 starts to cancel - resets control->state
thread3 comes in, goes into init
thread4 comes in, goes into else block
thread1 finishes cancel - releases semaphore
thread2 wakes up
thread2 decrements numSemaUsers to 0
thread4 increments numSemaUsers
thread4 does NOT set new semaphore
thread2 closes semaphore
thread4 tries to wait on closed semaphore...

On 5/27/05, Ross Johnson wrote:
> Guys,
>
> Is there anything you want to change before I cast a new release?
>
> http://sources.redhat.com/cgi-bin/cvsweb.cgi/pthreads/pthread_once.c?
> rev=1.18&content-type=text/x-cvsweb-markup&cvsroot=pthreads-win32
>
> Thanks.
> Ross
>
> On Fri, 2005-05-27 at 08:39 -0500, Tim Theisen wrote:
> > I picked up the latest and compiled with VC 7.1. All tests passed.
> > Then, I ran 100 iterations of the once [1-4] tests. These tests passed
> > as well. So, it has my stamp of approval.
> >
> > ...Tim
> > --
> > Tim Theisen                       Lead Research Software Engineer
> > Phone: +1 608 824 2848            TomoTherapy Incorporated
> > Fax: +1 608 824 2996              1240 Deming Way
> > Web: http://www.tomotherapy.com   Madison, WI 53717-1954
> >
> > -----Original Message-----
> > From: Ross Johnson [mailto:ross.johnson@homemail.com.au]
> > Sent: Friday, May 27, 2005 02:44
> > To: Tim Theisen
> > Cc: Vladimir Kliatchko; Gottlob Frege
> > Subject: New pthread_once implementation
> >
> > Hi Tim,
> >
> > The current CVS head contains the latest and much simpler
> > implementation of pthread_once just presented on the mailing list. It
> > passes on a UP machine as usual, but no-one has run it through an MP
> > system yet. Could you, when you get time?
> >
> > Thanks.
> > Ross