Subject: RE: semaphores and handle leaks
Date: Tue, 05 Dec 2006 21:25:00 -0000
Message-ID: <583A66600C8B6247BFB2DDB6FDEE8537037CD738@NA-PA-VBE01.na.tibco.com>
From: "Ye Liu"
To: "Morgan McLeod"
Mailing-List: contact pthreads-win32-help@sourceware.org; run by ezmlm

Would you please remove my email from the mailing list?

Thanks,
Liu Ye

-----Original Message-----
From: pthreads-win32-owner@sourceware.org
[mailto:pthreads-win32-owner@sourceware.org] On Behalf Of Morgan McLeod
Sent: Tuesday, December 05, 2006 1:14 PM
To: pthreads-win32@sources.redhat.com
Subject: semaphores and handle leaks

Hello all,

I've spent the last couple of days redesigning part of my application to
work around what seems like a handle leak when using semaphores. With my
previous design the handles were leaking very rapidly. With my new design
the leak is much slower but still troubling.
I'll give the gist of my application here, so if I'm doing something
obviously wrong maybe one of you can point it out to me. Then I'll go
back to trying to make a small sample program which exhibits the bug.

My application is a DLL to be called from a LabVIEW application or from
a C or C++ test program. I'm using GCC and MinGW32:

    L:\cpp\FrontEndControl2>g++ -v
    Reading specs from C:/system/mingw/bin/../lib/gcc/mingw32/3.4.2/specs
    Configured with: ../gcc/configure --with-gcc --with-gnu-ld --with-gnu-as
    --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads
    --disable-nls --enable-languages=c,c++,f77,ada,objc,java
    --disable-win32-registry --disable-shared --enable-sjlj-exceptions
    --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm
    --disable-libgcj-debug --enable-interpreter
    --enable-hash-synchronization --enable-libstdcxx-debug
    Thread model: win32
    gcc version 3.4.2 (mingw-special)

I've got the latest pthreadGC2.dll and libpthreadGC2.a from
http://sources.redhat.com/pthreads-win32/

There are several "monitor" threads. Each one creates semaphores on the
stack, around 5-15 times every 100 ms:

    sem_t synchLock;
    sem_init(&synchLock, 0, 0);
    monitor(AMBSI_TEMPERATURE, dataLength, data, &synchLock, &timestamp, &status);
    sem_wait(&synchLock);
    sem_destroy(&synchLock);

The pointer to the semaphore is put, along with other data, in a queue.
In my original design, a new thread would be launched for every item in
the queue. These threads would save the pointer to the caller's
semaphore, create a new one on the local stack, substitute it for the
caller's, and then add the data to a queue for transmission on a CAN bus.
Once it has been sent, the CAN bus handler will sem_post:

    // Create a unique semaphore sem2:
    sem_t sem2;
    sem_init(&sem2, 0, 0);

    // Substitute sem2 for the semaphore in the caller's completion structure:
    sem_t *sem1 = msg.completion_p -> synchLock_p;
    msg.completion_p -> synchLock_p = &sem2;

    // Send the message:
    canBus_mp -> sendMessage(msg);

    // Wait on sem2:
    sem_wait(&sem2);
    sem_destroy(&sem2);
    // [ make a local copy of the data ]

    // Put back the caller's semaphore, if any, and signal on it:
    msg.completion_p -> synchLock_p = sem1;
    if (sem1)
        sem_post(sem1);
    // [ log the transaction ]

The idea here is that this thread can take all the time it needs to log
the transaction to a database without holding up the caller's thread.

As I said, this design was leaking handles at a rapid clip. It seemed
like 2 handles per message were leaked -- hundreds every second. Using
gdb I traced the leaks to the sem_init calls.

Since creating all those threads was kind of a dumb design, I've changed
it to a more conventional one. Now, instead of one thread per message,
there is a single worker thread and a circular buffer for holding the
messages. It still works in basically the same way, though. A fixed
number of semaphores are preallocated and sem_init-ed in the buffer.
These are substituted for the caller's semaphore as above.

This design still leaks handles, only much more slowly. At a full load
of >300 messages / second, it leaks 6 to 10 handles per second. At a
reduced load of 15 messages every 5 seconds, it leaks 2 handles every
30 seconds or so.

Does anything jump out as being wrong that I'm doing? I'll try to get a
simple test program that demonstrates this sometime in the next few days.

Thanks for your consideration.

-Morgan McLeod
Software Engineer
National Radio Astronomy Observatory
Charlottesville, Va