public inbox for pthreads-win32@sourceware.org
* RE: semaphores and handle leaks
@ 2006-12-05 21:25 Ye Liu
  0 siblings, 0 replies; 5+ messages in thread
From: Ye Liu @ 2006-12-05 21:25 UTC (permalink / raw)
  To: Morgan McLeod, pthreads-win32

Would you please remove my email from the mailing list?

Thanks,

Liu Ye
 

-----Original Message-----
From: pthreads-win32-owner@sourceware.org
[mailto:pthreads-win32-owner@sourceware.org] On Behalf Of Morgan McLeod
Sent: Tuesday, December 05, 2006 1:14 PM
To: pthreads-win32@sources.redhat.com
Subject: semaphores and handle leaks

Hello all,

I've spent the last couple of days redesigning part of my application
to work around what seems like a handle leak when using semaphores.
With my previous design the handles were leaking very rapidly.  With
my new design the leak is much slower but still troubling.  I'll give
the gist of my application here, so if I'm doing something obviously
wrong maybe one of you can point it out to me.  Then I'll go back to
trying to make a small sample program which exhibits the bug.

My application is a DLL to be called from a LabVIEW application or from
a C or C++ test program.

I'm using GCC and MinGW32:

L:\cpp\FrontEndControl2>g++ -v
Reading specs from C:/system/mingw/bin/../lib/gcc/mingw32/3.4.2/specs
Configured with: ../gcc/configure --with-gcc --with-gnu-ld --with-gnu-as
--host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads
--disable-nls --enable-languages=c,c++,f77,ada,objc,java
--disable-win32-registry --disable-shared --enable-sjlj-exceptions
--enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm
--disable-libgcj-debug --enable-interpreter
--enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.2 (mingw-special)

I've got the latest pthreadGC2.dll and libpthreadGC2.a from
http://sources.redhat.com/pthreads-win32/

There are several "monitor" threads.  Each one creates semaphores on the
stack, around 5-15 times every 100 ms:

    sem_t synchLock;
    sem_init(&synchLock, 0, 0);
    monitor(AMBSI_TEMPERATURE, dataLength, data, &synchLock,
            &timestamp, &status);
    sem_wait(&synchLock);
    sem_destroy(&synchLock);

The pointer to the semaphore is put along with other data in a queue.

In my original design, a new thread would be launched for every item in
the queue.  Each thread would save the pointer to the caller's
semaphore, create a new one on its local stack, substitute it for the
caller's, and then add the data to a queue for transmission on a CAN
bus.  Once the message has been sent, the CAN bus handler sem_posts:

        // Create a unique semaphore sem2:
        sem_t sem2;
        sem_init(&sem2, 0, 0);

        // Substitute sem2 for the semaphore in the caller's
        // completion structure:
        sem_t *sem1 = msg.completion_p -> synchLock_p;
        msg.completion_p -> synchLock_p = &sem2;

        // Send the message:
        canBus_mp -> sendMessage(msg);

        // Wait on sem2:
        sem_wait(&sem2);
        sem_destroy(&sem2);

        // [ make a local copy of the data ]

        // Put back the caller's semaphore, if any, and signal on it:
        msg.completion_p -> synchLock_p = sem1;
        if (sem1)
            sem_post(sem1);

        // [ log the transaction ]

The idea here is that this thread can take all the time it needs to log
the transaction to a database without holding up the caller's thread.
As I said, this design was leaking handles at a rapid clip: it appeared
to leak 2 handles per message -- hundreds every second.  Using gdb I
traced the leaks to the sem_init calls.

Since creating all those threads was kind of a dumb design, I've
changed it to a more conventional one.  Now, instead of one thread per
message, there is a single worker thread and a circular buffer for
holding the messages.  It still works in basically the same way,
though.  A fixed number of semaphores are preallocated and sem_init-ed
in the buffer.  These are substituted for the caller's semaphore as
above.

This design still leaks handles, only much more slowly.  At a full load
of >300 messages / second, it leaks 6 to 10 handles per second.  At a
reduced load of 15 messages every 5 seconds, it leaks 2 handles every
30 seconds or so.

Does anything I'm doing jump out as wrong?  I'll try to put together a
simple test program that demonstrates this sometime in the next few
days.

Thanks for your consideration.

-Morgan McLeod
Software Engineer
National Radio Astronomy Observatory
Charlottesville, Va


* Patch of current cvs for WinCE
@ 2006-11-28 16:22 Marcel Ruff
  2006-11-28 16:27 ` Marcel Ruff
  0 siblings, 1 reply; 5+ messages in thread
From: Marcel Ruff @ 2006-11-28 16:22 UTC (permalink / raw)
  To: pthreads-win32

Hi,

with some minor changes the pthreads CVS snapshot from
today (2006-11-28 PTW32_VERSION 2,8,0,0) compiles with

  Visual C++ 8.0.5 2005 (on a XP)

for the target

  Windows CE 4.2, Smartphone 2003 with ARM4 processor.

My multi-threaded application seems to run fine, but I keep getting
strange return codes from

  pthread_cond_destroy()
  pthread_mutex_destroy()

Both calls return 16, which is EBUSY in need_errno.h.  I ignore it and
my application continues as expected.

Most probably this hack of mine is the reason:

What do I have to return in pthread_cancel.c instead of FpExc?
43a44,47
 > #if defined(WINCE)
 > #define PTW32_PROGCTR(Context)  ((Context).FpExc)
 > #endif
(Context is defined in my winnt.h)


What is the role of process.h (as I have disabled it; see the patch below)?

In errno.c, pthread_self() returns a pthread_t, which is sometimes a
pointer to a big struct, whereas in my case it ended up as

typedef struct {
    void * p;          /* Pointer to actual object */
    unsigned int x;    /* Extra information - reuse count etc */
} ptw32_handle_t;



How can I track this down?

Thank you,
Marcel




====== the patch =======
Index: create.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/create.c,v
retrieving revision 1.63
diff -r1.63 create.c
41c41,43
< #include <process.h>
---
 > #  ifndef WINCE
 > #    include <process.h>
 > #  endif
Index: errno.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/errno.c,v
retrieving revision 1.13
diff -r1.13 errno.c
76a77,90
 > # ifdef WINCE
 >   if ((self = pthread_self ()).p == NULL)
 >     {
 >       /*
 >        * Yikes! unable to allocate a thread!
 >        * Throw an exception? return an error?
 >        */
 >       result = &reallyBad;
 >     }
 >   else
 >     {
 >         result = 0; /* &(self.x) Which errno is appropriate? */
 >     }
 > # else
88a103
 > # endif
Index: exit.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/exit.c,v
retrieving revision 1.40
diff -r1.40 exit.c
41c41,43
< #   include <process.h>
---
 > #   ifndef WINCE
 > #      include <process.h>
 > #   endif
Index: implement.h
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/implement.h,v
retrieving revision 1.117
diff -r1.117 implement.h
661c661,663
< #   include <process.h>
---
 > #    ifndef WINCE
 > #        include <process.h>
 > #    endif
Index: mutex.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/mutex.c,v
retrieving revision 1.63
diff -r1.63 mutex.c
38c38,40
< #   include <process.h>
---
 > #   ifndef WINCE
 > #      include <process.h>
 > #   endif
Index: pthread.h
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread.h,v
retrieving revision 1.135
diff -r1.135 pthread.h
1215c1215,1217
<      PTW32_DLLPORT int * PTW32_CDECL _errno( void );
---
 > #     ifndef WINCE
 >          PTW32_DLLPORT int * PTW32_CDECL _errno( void );
 > #     endif
Index: pthread_cancel.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_cancel.c,v
retrieving revision 1.10
diff -r1.10 pthread_cancel.c
43a44,47
 > #if defined(WINCE)
 > #define PTW32_PROGCTR(Context)  ((Context).FpExc)
 > #endif
 >
Index: pthread_detach.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_detach.c,v
retrieving revision 1.10
diff -r1.10 pthread_detach.c
45c45,47
< #include <signal.h>
---
 > #ifndef WINCE
 > #  include <signal.h>
 > #endif
Index: pthread_join.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_join.c,v
retrieving revision 1.11
diff -r1.11 pthread_join.c
45c45,47
< #include <signal.h>
---
 > #ifndef WINCE
 > #  include <signal.h>
 > #endif
Index: pthread_kill.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_kill.c,v
retrieving revision 1.7
diff -r1.7 pthread_kill.c
44c44,46
< #include <signal.h>
---
 > #ifndef WINCE
 > #   include <signal.h>
 > #endif
Index: pthread_rwlock_destroy.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_destroy.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_destroy.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_init.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_init.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_init.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_rdlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_rdlock.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_rdlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_timedrdlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_timedrdlock.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_timedrdlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_timedwrlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_timedwrlock.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_timedwrlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_tryrdlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_tryrdlock.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_tryrdlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_trywrlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_trywrlock.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_trywrlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_unlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_unlock.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlock_unlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlock_wrlock.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlock_wrlock.c,v
retrieving revision 1.6
diff -r1.6 pthread_rwlock_wrlock.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlockattr_destroy.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlockattr_destroy.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlockattr_destroy.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlockattr_getpshared.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlockattr_getpshared.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlockattr_getpshared.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlockattr_init.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlockattr_init.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlockattr_init.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */
Index: pthread_rwlockattr_setpshared.c
===================================================================
RCS file: /cvs/pthreads-win32/pthreads/pthread_rwlockattr_setpshared.c,v
retrieving revision 1.5
diff -r1.5 pthread_rwlockattr_setpshared.c
37c37
< #include <errno.h>
---
 > /*#include <errno.h> is included by pthread.h (Marcel Ruff 2006-11-28) */




end of thread, other threads:[~2007-01-08 14:31 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2006-12-05 21:25 semaphores and handle leaks Ye Liu
  -- strict thread matches above, loose matches on Subject: below --
2006-11-28 16:22 Patch of current cvs for WinCE Marcel Ruff
2006-11-28 16:27 ` Marcel Ruff
2006-11-29 11:09   ` Marcel Ruff
2006-12-05 21:14     ` semaphores and handle leaks Morgan McLeod
2006-12-05 23:12       ` Morgan McLeod
2007-01-07  2:30         ` Ross Johnson
2007-01-08 14:31           ` Morgan McLeod
