public inbox for pthreads-win32@sourceware.org
* RE: semaphores and handle leaks
@ 2006-12-05 21:25 Ye Liu
  0 siblings, 0 replies; 5+ messages in thread
From: Ye Liu @ 2006-12-05 21:25 UTC (permalink / raw)
  To: Morgan McLeod, pthreads-win32

Would you please remove my email from the mailing list?

Thanks,

Liu Ye
 

-----Original Message-----
From: pthreads-win32-owner@sourceware.org
[mailto:pthreads-win32-owner@sourceware.org] On Behalf Of Morgan McLeod
Sent: Tuesday, December 05, 2006 1:14 PM
To: pthreads-win32@sources.redhat.com
Subject: semaphores and handle leaks

Hello all,

I've spent the last couple of days redesigning part of my application to
work around what seems like a handle leak when using semaphores.  With
my previous design they were leaking very rapidly.  With my new design
it is much slower but still troubling.  I'll give the gist of my
application here so if I'm doing something obviously wrong maybe one of
you can point it out to me.  Then I'll go back to trying to make a small
sample program which exhibits the bug.

My application is a DLL to be called from a LabVIEW application or from
a C or C++ test program.

I'm using GCC and MinGW32:

L:\cpp\FrontEndControl2>g++ -v
Reading specs from C:/system/mingw/bin/../lib/gcc/mingw32/3.4.2/specs
Configured with: ../gcc/configure --with-gcc --with-gnu-ld --with-gnu-as
--host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads
--disable-nls --enable-languages=c,c++,f77,ada,objc,java
--disable-win32-registry --disable-shared --enable-sjlj-exceptions
--enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm
--disable-libgcj-debug --enable-interpreter
--enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.2 (mingw-special)

I've got the latest pthreadGC2.dll and libpthreadGC2.a from
http://sources.redhat.com/pthreads-win32/

There are several "monitor" threads.  Each one creates semaphores on the
stack, around 5-15 times every 100 ms:

    sem_t synchLock;
    sem_init(&synchLock, 0, 0);
    monitor(AMBSI_TEMPERATURE, dataLength, data, &synchLock, &timestamp,
&status);
    sem_wait(&synchLock);
    sem_destroy(&synchLock);   

The pointer to the semaphore is put along with other data in a queue.

In my original design, a new thread would be launched for every item in
the queue.  These threads would save the pointer to the caller's
semaphore, create a new one on the local stack, substitute it for the
caller's, and then add the data to a queue for transmission on a CAN
bus.  Once it has been sent, the CAN bus handler will sem_post:

        // Create a unique semaphore sem2:
        sem_t sem2;
        sem_init(&sem2, 0, 0);
       
        // Substitute sem2 for the semaphore in the caller's completion structure:
        sem_t *sem1 = msg.completion_p -> synchLock_p;
        msg.completion_p -> synchLock_p = &sem2;

        // Send the message:
        canBus_mp -> sendMessage(msg);
       
        // Wait on sem2:
        sem_wait(&sem2);
        sem_destroy(&sem2);

        // [ make a local copy of the data]

        // Put back the caller's semaphore, if any, and signal on it:
        msg.completion_p -> synchLock_p = sem1;
        if (sem1)
            sem_post(sem1);

       // [ log the transaction ]

The idea here is that this thread can take all the time it needs to log 
the transaction to a database without holding up the caller's thread.   
As I said, this design was leaking handles at a rapid clip.  It seemed
like 2 handles per message were leaked -- hundreds every second.  Using
gdb I traced the leaks to happening in sem_init calls.
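
(As an aside, the handle count can also be watched from inside the
process instead of from Task Manager.  A minimal sketch, assuming
Windows XP SP1 or later and headers that declare GetProcessHandleCount
-- 2006-era MinGW w32api headers may not, in which case you would have
to declare it yourself -- and not taken from the application above:

    #include <windows.h>
    #include <stdio.h>

    // Print the number of handles currently open in this process.
    // This is the same counter Task Manager shows in its "Handles" column.
    static void reportHandleCount(const char *label)
    {
        DWORD count = 0;
        if (GetProcessHandleCount(GetCurrentProcess(), &count))
            printf("%s: %lu handles\n", label, (unsigned long) count);
        else
            printf("%s: GetProcessHandleCount failed, error %lu\n",
                   label, (unsigned long) GetLastError());
    }

Calling reportHandleCount() before and after a burst of sem_init /
sem_destroy pairs makes a leak show up as a steadily growing count.)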

Since creating all those threads was kind of a dumb design, I've changed
it to a more conventional design.  Now instead of one thread per
message, there is a single worker thread and a circular buffer for
holding the messages.  It still works in basically the same way,
though.  A fixed number of semaphores are preallocated and sem_init-ed
in the buffer.  These are substituted for the caller's semaphore as
above.
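
(For concreteness, the new shape is roughly the sketch below.  The
names, the ring size and the enqueue side are made up for illustration
-- this is not the actual application code -- but the point is that
sem_init now runs once per slot at startup rather than once per message:

    #include <pthread.h>
    #include <semaphore.h>
    #include <sched.h>

    // One slot of the circular buffer: the caller's semaphore plus a
    // preallocated substitute semaphore owned by the slot.
    struct QueuedMsg {
        sem_t *callerLock;  // what the caller is sem_wait-ing on (may be NULL)
        sem_t  slotLock;    // sem_init-ed once at startup, reused per message
        // ... CAN message payload ...
    };

    const int RING_SIZE = 32;
    static QueuedMsg ring[RING_SIZE];
    static int head = 0, tail = 0;  // producers bump tail under ringLock
    static pthread_mutex_t ringLock = PTHREAD_MUTEX_INITIALIZER;

    static void initRing()
    {
        for (int i = 0; i < RING_SIZE; ++i)
            sem_init(&ring[i].slotLock, 0, 0);
    }

    // Single worker thread: take the next queued message, wait on the
    // slot's own semaphore until the CAN handler posts it, then release
    // the original caller and do the slow logging afterwards.
    static void *worker(void *)
    {
        for (;;) {
            pthread_mutex_lock(&ringLock);
            QueuedMsg *m = (head == tail) ? 0 : &ring[(head++) % RING_SIZE];
            pthread_mutex_unlock(&ringLock);
            if (!m) { sched_yield(); continue; }

            // sendMessage(m) would go out on the CAN bus here; the bus
            // handler posts m->slotLock when the transfer completes.
            sem_wait(&m->slotLock);

            if (m->callerLock)
                sem_post(m->callerLock);

            // ... log the transaction without holding up the caller ...
        }
        return 0;
    }

Again, that is only the shape of the single-worker design, not the real
code.)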

This design still leaks handles, only much slower.  At full load of
over 300 messages per second, it leaks 6 to 10 handles per second.  At a
reduced load of 15 messages every 5 seconds, it leaks 2 handles every 30
seconds or so.

Does anything jump out as being wrong that I'm doing?   I'll try to get 
a simple test program that demonstrates this sometime in the next few
days.

Thanks for your consideration.

-Morgan McLeod
Software Engineer
National Radio Astronomy Observatory
Charlottesville, Va



* Re: semaphores and handle leaks
  2007-01-07  2:30         ` Ross Johnson
@ 2007-01-08 14:31           ` Morgan McLeod
  0 siblings, 0 replies; 5+ messages in thread
From: Morgan McLeod @ 2007-01-08 14:31 UTC (permalink / raw)
  To: Ross Johnson; +Cc: Pthreads-Win32 list

Hello Ross, all:

Yes, 2.8.0 seems to fix the handle leaks that I was seeing.  I haven't
tried the workaround you suggest.  I have attached the latest version of 
my test program below, including updated comments indicating handle counts.

Thanks
-MM


Ross Johnson wrote:
> Hi Morgan,
>
> Could you try your sample code below with version 2.8.0 of the 
> library. I believe the leak has been plugged. Sergey Fokin reported a 
> race in sem_destroy() that, in your code below, may result in 
> semaphores not being destroyed.
>
> Where you init and destroy semaphores in thread1 ...
>
> sem_init(E.synchLock, 0, 0);
>  ...
> sem_destroy(E.synchLock);
>
> ... if you were to check the return value from sem_destroy() I believe 
> you would find that errno sometimes returns EBUSY. This bug has been 
> fixed in 2.8.0.
>
> For prior versions of the library, the following modification should 
> provide a workaround (untested):-
>
> while (sem_destroy(E.synchLock) != 0 && errno == EBUSY)
>   {
>     // Assuming we can busy-wait on SMP systems. pthread_num_processors_np()
>     // in pthreads-win32 looks at the number of processors assigned to the
>     // process, which may be <= the number in the system. Not portable.
>     if (pthread_num_processors_np() < 2)
>       sched_yield();
>   }
>
> Regards.
> Ross



#include <stdio.h>
#include <windows.h>
#include <pthread.h>
#include <semaphore.h>

struct listElem {
    int num;
    sem_t *synchLock;
    
    listElem(int _num = 0, sem_t *_synchLock = NULL)
      : num(_num),
        synchLock(_synchLock)
        {}

    ~listElem()
      {}
};

const bool JOIN_THREADS = false;
const int COUNT = 1000;

listElem list1[COUNT];
listElem list2[COUNT];
int pos1, pos2, end1, end2;

// mutexes to protect the lists:
pthread_mutex_t mutex1;
pthread_mutex_t mutex2;

// flags to tell the threads to stop:
bool shutdownNow;
bool shutdownDone1;
bool shutdownDone2;

// thread 1 processes list1:
void *thread1(void *arg) {
    listElem E;
    while (true) {
        if (shutdownNow) {
            shutdownDone1 = true;
            pthread_exit(NULL);
        }       
        pthread_mutex_lock(&mutex1);
        if (end1 == pos1)
            pthread_mutex_unlock(&mutex1);
        
        else {        
            // get the next element from the list:
            E = list1[pos1++];
            pthread_mutex_unlock(&mutex1);
            
            // save the original semaphore:
            sem_t *sem1 = E.synchLock;
            
            // create and initialize a new semaphore.
            // substitute it for the original:
            sem_t sem2;
            E.synchLock = &sem2;
            sem_init(E.synchLock, 0, 0);
            
            // put the item in list2 for processing by thread2:
            pthread_mutex_lock(&mutex2);
            list2[end2++] = E;
            pthread_mutex_unlock(&mutex2);
            
            // Wait on, then destroy the substitute semaphore:
            sem_wait(E.synchLock);
            sem_destroy(E.synchLock);
            
            // put back and post on the original semaphore:
            E.synchLock = sem1;
            sem_post(E.synchLock);

            printf("thread1: %d done\n", E.num);
        }
        Sleep(10);
    }
}

// thread2 processes list2:
void *thread2(void *arg) {
    listElem E;
    while (true) {
        if (shutdownNow) {
            shutdownDone2 = true;
            pthread_exit(NULL);
        }       
        pthread_mutex_lock(&mutex2);
        if (end2 == pos2)
            pthread_mutex_unlock(&mutex2);
        
        else {
            E = list2[pos2++];
            pthread_mutex_unlock(&mutex2);
            
            sem_post(E.synchLock);

            printf("thread2: %d done\n", E.num);
        }
        Sleep(10);
    }
}

int main(int, char*[]) {
    // Initialize flags and indexes:
    shutdownNow = shutdownDone1 = shutdownDone2 = false;
    pos1 = pos2 = end1 = end2 = 0;
    
    pthread_mutex_init(&mutex1, NULL);
    pthread_mutex_init(&mutex2, NULL);

    // Pause to look at Task Manager.  Handles = 10:
    Sleep(5000);
    
    sem_t synchLocks[COUNT];
    for (int index = 0; index < COUNT; ++index) {
        sem_init(&synchLocks[index], 0, 0);
        listElem E(index, &synchLocks[index]);
        list1[end1++] = E;
    }

    // Handles is between 2015 and 2019.  
    // With pthreads_win32 ver. 2.7.0 it would start to leak handles...

    pthread_t T1;    
    pthread_create(&T1, NULL, thread1, NULL);

    pthread_t T2;
    pthread_create(&T2, NULL, thread2, NULL);

    if (!JOIN_THREADS) {
        pthread_detach(T1);
        pthread_detach(T2);
    }
    
    while (end1 > pos1 || end2 > pos2)
        Sleep(10);
    
    // Pause to look at Task Manager:  
    // With 2.7.0 Handles = 2151 with joinable threads (varies)
    //            Handles = 2265 with detached threads (varies)
    // With 2.8.0 Handles holds between 2015 and 2019.
    Sleep(5000);
    
    shutdownNow = true;
    void *tr;
    if (JOIN_THREADS) {
        pthread_join(T1, &tr);
        pthread_join(T2, &tr);
    }
    
    while (!shutdownDone1 && !shutdownDone2)
        Sleep(10);

    for (int index = 0; index < COUNT; ++index)
        sem_destroy(&synchLocks[index]);

    pthread_mutex_destroy(&mutex1);
    pthread_mutex_destroy(&mutex2);

    // Pause to look at Task Manager:
    // With 2.7.0 Handles = 148 with joinable threads (varies)
    //            Handles = 268 with detached threads (varies)
    // With 2.8.0 Handles = 9 or 12.
    Sleep(5000);
    
    printf("done\n");    
    return 0;
}
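
(A program like this should build with the MinGW setup described in the
original posting along the lines of

    g++ -o semtest semtest.cpp -lpthreadGC2

with libpthreadGC2.a on the linker's library path and pthreadGC2.dll
next to the EXE or on PATH at run time.  The file name and the exact
options are an assumption here, not the command actually used.)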




* Re: semaphores and handle leaks
  2006-12-05 23:12       ` Morgan McLeod
@ 2007-01-07  2:30         ` Ross Johnson
  2007-01-08 14:31           ` Morgan McLeod
  0 siblings, 1 reply; 5+ messages in thread
From: Ross Johnson @ 2007-01-07  2:30 UTC (permalink / raw)
  To: Morgan McLeod, Pthreads-Win32 list

Hi Morgan,

Could you try your sample code below with version 2.8.0 of the library. 
I believe the leak has been plugged. Sergey Fokin reported a race in 
sem_destroy() that, in your code below, may result in semaphores not 
being destroyed.

Where you init and destroy semaphores in thread1 ...

sem_init(E.synchLock, 0, 0);
  ...
sem_destroy(E.synchLock);

... if you were to check the return value from sem_destroy() I believe 
you would find that errno sometimes returns EBUSY. This bug has been 
fixed in 2.8.0.

For prior versions of the library, the following modification should 
provide a workaround (untested):-

while (sem_destroy(E.synchLock) != 0 && errno == EBUSY)
  {
    // Assuming we can busy-wait on SMP systems. pthread_num_processors_np()
    // in pthreads-win32 looks at the number of processors assigned to the
    // process, which may be <= the number in the system. Not portable.
    if (pthread_num_processors_np() < 2)
      sched_yield();
  }

Regards.
Ross

Morgan McLeod wrote:
> Hello again.
>
> Below is C++ code for a fairly simple program which exhibits the 
> apparent handle leaks I described in my previous posting.   I linked 
> this with the standard STL rather than STLPort and it makes no 
> difference.   This is compiled to an EXE, not a DLL like my real 
> application.
>
> Again, please feel free to point out if I'm doing something wrong.
>
> Thanks
>
> -Morgan McLeod
> Software Engineer
> National Radio Astronomy Observatory
> Charlottesville, Va
>
>
> #include <stdio.h>
> #include <windows.h>
> #include <pthread.h>
> #include <semaphore.h>
> #include <list>
>
> struct listElem {
>    int num;
>    sem_t *synchLock;
>
>    listElem(int _num, sem_t *_synchLock)
>      : num(_num),
>        synchLock(_synchLock)
>        {}
>
>    ~listElem()
>      {}
> };
>
> typedef std::list<listElem> semList_t;
> semList_t list1;
> semList_t list2;
>
> // mutexes to protect the lists:
> pthread_mutex_t mutex1;
> pthread_mutex_t mutex2;
>
> // flags to tell the threads to stop:
> bool shutdownNow;
> bool shutdownDone1;
> bool shutdownDone2;
>
> // thread 1 processes list1:
> void *thread1(void *arg) {
>    while (true) {
>        if (shutdownNow) {
>            shutdownDone1 = true;
>            pthread_exit(NULL);
>        }
>        pthread_mutex_lock(&mutex1);
>        if (list1.empty())
>            pthread_mutex_unlock(&mutex1);
>
>        else {
>            // remove the front element from the list:
>            listElem E = list1.front();
>            list1.pop_front();
>            pthread_mutex_unlock(&mutex1);
>
>            // save the original semaphore:
>            sem_t *sem1 = E.synchLock;
>
>            // create and initialize a new semaphore.
>            // substitute it for the original:
>            sem_t sem2;
>            E.synchLock = &sem2;
>            sem_init(E.synchLock, 0, 0);
>
>            // put the item in list2 for processing by thread2:
>            pthread_mutex_lock(&mutex2);
>            list2.push_back(E);
>            pthread_mutex_unlock(&mutex2);
>
>            // Wait on, then destroy the substitute semaphore:
>            sem_wait(E.synchLock);
>            sem_destroy(E.synchLock);
>
>            // put back and post on the original semaphore:
>            E.synchLock = sem1;
>            sem_post(E.synchLock);
>
>            printf("thread1: %d done\n", E.num);
>        }
>        Sleep(10);
>    }
> }
>
> // thread2 processes list2:
> void *thread2(void *arg) {
>    while (true) {
>        if (shutdownNow) {
>            shutdownDone2 = true;
>            pthread_exit(NULL);
>        }
>        pthread_mutex_lock(&mutex2);
>        if (list2.empty())
>            pthread_mutex_unlock(&mutex2);
>
>        else {
>            listElem E = list2.front();
>            list2.pop_front();
>            pthread_mutex_unlock(&mutex2);
>
>            sem_post(E.synchLock);
>
>            printf("thread2: %d done\n", E.num);
>        }
>        Sleep(10);
>    }
> }
>
> const int COUNT = 1000;
>
> int main(int, char*[]) {
>    // Initialize flags:
>    shutdownNow = shutdownDone1 = shutdownDone2 = false;
>
>    // Pause to look at Task Manager.  Handles = 8:
>    Sleep(5000);
>
>    pthread_mutex_init(&mutex1, NULL);
>    pthread_mutex_init(&mutex2, NULL);
>
>    sem_t synchLocks[COUNT];
>
>    for (int index = 0; index < COUNT; ++index) {
>        sem_init(&synchLocks[index], 0, 0);
>        listElem E(index, &synchLocks[index]);
>        list1.push_back(E);
>    }
>
>    // Handles = 2019.  Starts to leak...
>
>    pthread_t T1;
>    pthread_create(&T1, NULL, thread1, NULL);
>
>    pthread_t T2;
>    pthread_create(&T2, NULL, thread2, NULL);
>
>    while (!list1.empty() || !list2.empty())
>        Sleep(10);
>
>    // Pause to look at Task Manager.  Handles = 2261 (varies):
>    Sleep(5000);
>
>    shutdownNow = true;
>    while (!shutdownDone1 && !shutdownDone2)
>        Sleep(10);
>
>    for (int index = 0; index < COUNT; ++index)
>        sem_destroy(&synchLocks[index]);
>
>    pthread_mutex_destroy(&mutex1);
>    pthread_mutex_destroy(&mutex2);
>
>    // Pause to look at Task Manager.  Handles = 264 (varies):
>    Sleep(5000);
>
>    printf("done\n");
>    return 0;
> }
>
>
>
>
>
>
>


* Re: semaphores and handle leaks
  2006-12-05 21:14     ` semaphores and handle leaks Morgan McLeod
@ 2006-12-05 23:12       ` Morgan McLeod
  2007-01-07  2:30         ` Ross Johnson
  0 siblings, 1 reply; 5+ messages in thread
From: Morgan McLeod @ 2006-12-05 23:12 UTC (permalink / raw)
  To: pthreads-win32

Hello again.

Below is C++ code for a fairly simple program which exhibits the 
apparent handle leaks I described in my previous posting.   I linked 
this with the standard STL rather than STLPort and it makes no 
difference.   This is compiled to an EXE, not a DLL like my real 
application.

Again, please feel free to point out if I'm doing something wrong.

Thanks

-Morgan McLeod
Software Engineer
National Radio Astronomy Observatory
Charlottesville, Va


#include <stdio.h>
#include <windows.h>
#include <pthread.h>
#include <semaphore.h>
#include <list>

struct listElem {
    int num;
    sem_t *synchLock;
   
    listElem(int _num, sem_t *_synchLock)
      : num(_num),
        synchLock(_synchLock)
        {}

    ~listElem()
      {}
};

typedef std::list<listElem> semList_t;
semList_t list1;
semList_t list2;

// mutexes to protect the lists:
pthread_mutex_t mutex1;
pthread_mutex_t mutex2;

// flags to tell the threads to stop:
bool shutdownNow;
bool shutdownDone1;
bool shutdownDone2;

// thread 1 processes list1:
void *thread1(void *arg) {
    while (true) {
        if (shutdownNow) {
            shutdownDone1 = true;
            pthread_exit(NULL);
        }      
        pthread_mutex_lock(&mutex1);
        if (list1.empty())
            pthread_mutex_unlock(&mutex1);
       
        else {       
            // remove the front element from the list:
            listElem E = list1.front();
            list1.pop_front();
            pthread_mutex_unlock(&mutex1);
           
            // save the original semaphore:
            sem_t *sem1 = E.synchLock;
           
            // create and initialize a new semaphore.
            // substitute it for the original:
            sem_t sem2;
            E.synchLock = &sem2;
            sem_init(E.synchLock, 0, 0);
           
            // put the item in list2 for processing by thread2:
            pthread_mutex_lock(&mutex2);
            list2.push_back(E);
            pthread_mutex_unlock(&mutex2);
           
            // Wait on, then destroy the substitute semaphore:
            sem_wait(E.synchLock);
            sem_destroy(E.synchLock);
           
            // put back and post on the original semaphore:
            E.synchLock = sem1;
            sem_post(E.synchLock);

            printf("thread1: %d done\n", E.num);
        }
        Sleep(10);
    }
}

// thread2 processes list2:
void *thread2(void *arg) {
    while (true) {
        if (shutdownNow) {
            shutdownDone2 = true;
            pthread_exit(NULL);
        }      
        pthread_mutex_lock(&mutex2);
        if (list2.empty())
            pthread_mutex_unlock(&mutex2);
       
        else {
            listElem E = list2.front();
            list2.pop_front();
            pthread_mutex_unlock(&mutex2);
           
            sem_post(E.synchLock);

            printf("thread2: %d done\n", E.num);
        }
        Sleep(10);
    }
}

const int COUNT = 1000;

int main(int, char*[]) {
    // Initialize flags:
    shutdownNow = shutdownDone1 = shutdownDone2 = false;
   
    // Pause to look at Task Manager.  Handles = 8:
    Sleep(5000);

    pthread_mutex_init(&mutex1, NULL);
    pthread_mutex_init(&mutex2, NULL);
   
    sem_t synchLocks[COUNT];
   
    for (int index = 0; index < COUNT; ++index) {
        sem_init(&synchLocks[index], 0, 0);
        listElem E(index, &synchLocks[index]);
        list1.push_back(E);
    }

    // Handles = 2019.  Starts to leak...

    pthread_t T1;
    pthread_create(&T1, NULL, thread1, NULL);   

    pthread_t T2;
    pthread_create(&T2, NULL, thread2, NULL);   
   
    while (!list1.empty() || !list2.empty())
        Sleep(10);
   
    // Pause to look at Task Manager.  Handles = 2261 (varies):
    Sleep(5000);
   
    shutdownNow = true;
    while (!shutdownDone1 && !shutdownDone2)
        Sleep(10);

    for (int index = 0; index < COUNT; ++index)
        sem_destroy(&synchLocks[index]);

    pthread_mutex_destroy(&mutex1);
    pthread_mutex_destroy(&mutex2);

    // Pause to look at Task Manager.  Handles = 264 (varies):
    Sleep(5000);
   
    printf("done\n");   
    return 0;
}







* semaphores and handle leaks
  2006-11-29 11:09   ` Marcel Ruff
@ 2006-12-05 21:14     ` Morgan McLeod
  2006-12-05 23:12       ` Morgan McLeod
  0 siblings, 1 reply; 5+ messages in thread
From: Morgan McLeod @ 2006-12-05 21:14 UTC (permalink / raw)
  To: pthreads-win32

Hello all,

I've spent the last couple of days redesigning part of my application to 
work around what seems like a handle leak when using semaphores.   With 
my previous design they were leaking very rapidly.  With my new design 
it is much slower but still troubling.   I'll give the gist of my 
application here so if I'm doing something obviously wrong maybe one of 
you can point it out to me.  Then I'll go back to trying to make a small 
sample program which exhibits the bug.

My application is a DLL to be called from a LabVIEW application or from 
a C or C++ test program.

I'm using GCC and MinGW32:

L:\cpp\FrontEndControl2>g++ -v
Reading specs from C:/system/mingw/bin/../lib/gcc/mingw32/3.4.2/specs
Configured with: ../gcc/configure --with-gcc --with-gnu-ld --with-gnu-as 
--host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads 
--disable-nls --enable-languages=c,c++,f77,ada,objc,java 
--disable-win32-registry --disable-shared --enable-sjlj-exceptions 
--enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm 
--disable-libgcj-debug --enable-interpreter 
--enable-hash-synchronization --enable-libstdcxx-debug
Thread model: win32
gcc version 3.4.2 (mingw-special)

I've got the latest pthreadGC2.dll and libpthreadGC2.a from
http://sources.redhat.com/pthreads-win32/

There are several "monitor" threads.  Each one creates semaphores on the 
stack, around 5-15 times every 100 ms:

    sem_t synchLock;
    sem_init(&synchLock, 0, 0);
    monitor(AMBSI_TEMPERATURE, dataLength, data, &synchLock, &timestamp, 
&status);
    sem_wait(&synchLock);
    sem_destroy(&synchLock);   

The pointer to the semaphore is put along with other data in a queue.

In my original design, a new thread would be launched for every item in 
the queue.  These threads would save the pointer to the caller's 
semaphore, create a new one on the local stack, substitute it for the 
caller's, and then add the data to a queue for transmission on a CAN 
bus.  Once it has been sent, the CAN bus handler will sem_post:

        // Create a unique semaphore sem2:
        sem_t sem2;
        sem_init(&sem2, 0, 0);
       
        // Substitute sem2 for the semaphore in the caller's completion structure:
        sem_t *sem1 = msg.completion_p -> synchLock_p;
        msg.completion_p -> synchLock_p = &sem2;

        // Send the message:
        canBus_mp -> sendMessage(msg);
       
        // Wait on sem2:
        sem_wait(&sem2);
        sem_destroy(&sem2);

        // [ make a local copy of the data]

        // Put back the caller's semaphore, if any, and signal on it:
        msg.completion_p -> synchLock_p = sem1;
        if (sem1)
            sem_post(sem1);

       // [ log the transaction ]

The idea here is that this thread can take all the time it needs to log 
the transaction to a database without holding up the caller's thread.   
As I said, this design was leaking handles at a rapid clip.  It seemed 
like 2 handles per message were leaked -- hundreds every second.   Using 
gdb I traced the leaks to happening in sem_init calls.

Since creating all those threads was kind of a dumb design, I've changed 
it to a more conventional design.   Now instead of one thread per 
message, there is a single worker thread and a circular buffer for 
holding the messages.  It still works in basically the same way,
though.  A fixed number of semaphores are preallocated and sem_init-ed
in the buffer.  These are substituted for the caller's semaphore as
above.

This design still leaks handles, only much slower.  At full load of
over 300 messages per second, it leaks 6 to 10 handles per second.  At a
reduced load of 15 messages every 5 seconds, it leaks 2 handles every 30 
seconds or so.

Does anything jump out as being wrong that I'm doing?   I'll try to get 
a simple test program that demonstrates this sometime in the next few days.

Thanks for your consideration.

-Morgan McLeod
Software Engineer
National Radio Astronomy Observatory
Charlottesville, Va




Thread overview: 5+ messages
2006-12-05 21:25 semaphores and handle leaks Ye Liu
  -- strict thread matches above, loose matches on Subject: below --
2006-11-28 16:22 Patch of current cvs for WinCE Marcel Ruff
2006-11-28 16:27 ` Marcel Ruff
2006-11-29 11:09   ` Marcel Ruff
2006-12-05 21:14     ` semaphores and handle leaks Morgan McLeod
2006-12-05 23:12       ` Morgan McLeod
2007-01-07  2:30         ` Ross Johnson
2007-01-08 14:31           ` Morgan McLeod
