public inbox for ecos-discuss@sourceware.org
* [ECOS] question on eCos mutex behavior
@ 2007-04-25 19:21 David Hill
  2007-04-25 19:39 ` Gary Thomas
  2007-04-25 20:51 ` Andrew Lunn
  0 siblings, 2 replies; 8+ messages in thread
From: David Hill @ 2007-04-25 19:21 UTC (permalink / raw)
  To: ecos-discuss

Hi.

I'm experiencing some unexpected behavior using mutexes under eCos and I'm wondering if this is just the way it works or if I might be missing something.  I created a simple test case below to illustrate the behavior - I have two threads that are both running at the same priority and always trying to get a mutex lock.  The only difference between them is that I guaranteed that the first would win the first time by inserting an artificial delay into the 2nd thread.

I expected that when the 1st thread unlocks the mutex, and tries to take it again, the cyg_mutex_lock() function would hang because there is already another thread pending on that mutex.  However, what I'm seeing is that the 1st thread continues to succeed over and over again and the 2nd thread gets starved, so I see lots of "LOCK0" statements and no "LOCK1" statements.

If I uncomment the 'cyg_thread_delay(1)' statement after the mutex is unlocked, then I get the nice ping-pong effect I was expecting, but I can't really use that workaround for my application.

If this is expected behavior, then is there a different mutual exclusion primitive that will provide ordering?

If this is unexpected behavior, then are there some kernel parameters that might explain this?

Thanks,
Dave Hill
AirDefense, Inc.


#include <cyg/kernel/kapi.h>
#include <cyg/infra/diag.h>
#include <stdint.h>

static cyg_mutex_t TestMutex;

static void testthread(cyg_addrword_t param)
{
    if (param == 0)
    {
        /* Thread 0 initializes the mutex; thread 1 delays so that
           thread 0 is guaranteed to win the first lock. */
        cyg_mutex_init(&TestMutex);
    }
    else
    {
        cyg_thread_delay(50);
    }

    while (cyg_mutex_lock(&TestMutex) == true)
    {
        diag_printf("LOCK%d\n", (int)param);
        cyg_thread_delay(100);
        cyg_mutex_unlock(&TestMutex);
//        cyg_thread_delay(1);
    }
}

static void startTestThread(int num, uint8_t *stack,
                            uint16_t stacksize, cyg_thread *threadData)
{
    cyg_handle_t handle;
    cyg_thread_create(10,                   /* scheduling priority */
                      testthread,
                      (cyg_addrword_t)num,  /* entry data: thread number */
                      "tt",
                      stack,
                      stacksize,
                      &handle,
                      threadData);
    cyg_thread_resume(handle);
}
static void cliTestMutexCmd(int socket, CLI_INPUT *in)
{
    static uint8_t stack1[1024];
    static uint8_t stack2[1024];
    static cyg_thread thread1;
    static cyg_thread thread2;

    startTestThread(0, stack1, 1024, &thread1);
    startTestThread(1, stack2, 1024, &thread2);
}



--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss


* Re: [ECOS] question on eCos mutex behavior
  2007-04-25 19:21 [ECOS] question on eCos mutex behavior David Hill
@ 2007-04-25 19:39 ` Gary Thomas
  2007-04-25 20:37   ` Andrew Lunn
  2007-04-25 20:51 ` Andrew Lunn
  1 sibling, 1 reply; 8+ messages in thread
From: Gary Thomas @ 2007-04-25 19:39 UTC (permalink / raw)
  To: David Hill; +Cc: ecos-discuss

David Hill wrote:
> [... original message and test code snipped; quoted in full above ...]

Look carefully at your code - the second thread is most
likely going to enter, then try to get the mutex which
will fail and then exit.  No more second thread, thus
no "LOCK1" messages.


-- 
------------------------------------------------------------
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
------------------------------------------------------------


* Re: [ECOS] question on eCos mutex behavior
  2007-04-25 19:39 ` Gary Thomas
@ 2007-04-25 20:37   ` Andrew Lunn
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Lunn @ 2007-04-25 20:37 UTC (permalink / raw)
  To: Gary Thomas; +Cc: David Hill, ecos-discuss

On Wed, Apr 25, 2007 at 01:39:39PM -0600, Gary Thomas wrote:
> David Hill wrote:
> > [... original message and test code snipped ...]
>
> Look carefully at your code - the second thread is most
> likely going to enter, then try to get the mutex which
> will fail and then exit.  No more second thread, thus
> no "LOCK1" messages.

Hi Gary.

I don't follow what you are saying. cyg_mutex_lock() only fails if
cyg_mutex_release() or cyg_thread_release() is called, so I don't see
why the whole loop should exit with this code.

    Andrew


* Re: [ECOS] question on eCos mutex behavior
  2007-04-25 19:21 [ECOS] question on eCos mutex behavior David Hill
  2007-04-25 19:39 ` Gary Thomas
@ 2007-04-25 20:51 ` Andrew Lunn
  2007-04-25 21:08   ` David Hill
  2007-04-25 21:17   ` Andrew Lunn
  1 sibling, 2 replies; 8+ messages in thread
From: Andrew Lunn @ 2007-04-25 20:51 UTC (permalink / raw)
  To: David Hill; +Cc: ecos-discuss

On Wed, Apr 25, 2007 at 03:20:58PM -0400, David Hill wrote:

> I'm experiencing some unexpected behavior using mutexes under eCos
> and I'm wondering if this is just the way it works or if I might be
> missing something. I created a simple test case below to illustrate
> the behavior - I have two threads that are both running at the same
> priority and always trying to get a mutex lock. The only difference
> between them is that I guaranteed that the first would win the first
> time by inserting an artificial delay into the 2nd thread. I
> expected that when the 1st thread unlocks the mutex, and tries to
> take it again, the cyg_mutex_lock() function would hang because
> there is already another thread pending on that mutex.

You can see the code in packages/kernel/current/src/sync/mutex.cxx

The unlock call in thread1 just marks thread2 as runnable. Since both
threads are running at the same priority, there is no preemption. So
thread1 loops around and, since the mutex is no longer locked, locks
it again.

When you add the delay, thread2 gets a chance to run after being made
runnable via the unlock. It then claims the lock and you get the
ping-pong behaviour.

In order for mutexes to work the way you thought they should, the
unlock call would have to actually re-lock the mutex in the name of
thread2. That seems a bit ugly to me.

> However,
> what I'm seeing is that the 1st thread continues to succeed over and
> over again and the 2nd thread gets starved, so I see lots of "LOCK0"
> statements and no "LOCK1" statements. If I uncomment the
> 'cyg_thread_delay(1)' statement after the mutex is unlocked, then I
> get the nice ping-pong effect I was expecting, but I can't really
> use that workaround for my application.

You could also use cyg_thread_yield(). 
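
For example, a minimal, untested sketch of the loop in your test code
with that change:

    while (cyg_mutex_lock(&TestMutex) == true)
    {
        diag_printf("LOCK%d\n", (int)param);
        cyg_thread_delay(100);
        cyg_mutex_unlock(&TestMutex);
        cyg_thread_yield();  /* give the other equal-priority thread a turn */
    }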

> If this is expected behavior, then is there a different mutual
> exclusion primitive that will provide ordering?

You could do something like:

for (;;) {
    cyg_mutex_lock(&mutex);
    cyg_thread_set_priority(cyg_thread_self(),
                            cyg_thread_get_priority(cyg_thread_self()) - 1);
    work();
    cyg_mutex_unlock(&mutex);
    cyg_thread_set_priority(cyg_thread_self(),
                            cyg_thread_get_priority(cyg_thread_self()) + 1);
}

but cyg_thread_yield() seems like a cleaner solution.

    Andrew
    


* RE: [ECOS] question on eCos mutex behavior
  2007-04-25 20:51 ` Andrew Lunn
@ 2007-04-25 21:08   ` David Hill
  2007-04-25 21:28     ` Andrew Lunn
  2007-04-25 21:17   ` Andrew Lunn
  1 sibling, 1 reply; 8+ messages in thread
From: David Hill @ 2007-04-25 21:08 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: ecos-discuss

Thank you very much, Andrew.  cyg_thread_yield() gives me a good
workaround.

Do any of the eCos synchronization primitives provide FIFO ordering?

Thanks,
Dave Hill
AirDefense, Inc

-----Original Message-----
From: ecos-discuss-owner@ecos.sourceware.org
[mailto:ecos-discuss-owner@ecos.sourceware.org] On Behalf Of Andrew Lunn
Sent: Wednesday, April 25, 2007 4:52 PM
To: David Hill
Cc: ecos-discuss@ecos.sourceware.org
Subject: Re: [ECOS] question on eCos mutex behavior

[... Andrew's message, quoted in full above, snipped ...]


* Re: [ECOS] question on eCos mutex behavior
  2007-04-25 20:51 ` Andrew Lunn
  2007-04-25 21:08   ` David Hill
@ 2007-04-25 21:17   ` Andrew Lunn
  1 sibling, 0 replies; 8+ messages in thread
From: Andrew Lunn @ 2007-04-25 21:17 UTC (permalink / raw)
  To: David Hill, ecos-discuss

> You could do something like:
>
> for (;;) {
>     cyg_mutex_lock(&mutex);
>     cyg_thread_set_priority(cyg_thread_self(),
>                             cyg_thread_get_priority(cyg_thread_self()) - 1);
>     work();
>     cyg_mutex_unlock(&mutex);
>     cyg_thread_set_priority(cyg_thread_self(),
>                             cyg_thread_get_priority(cyg_thread_self()) + 1);
> }

Duh! Forget that, it does not work!

     Andrew


* Re: [ECOS] question on eCos mutex behavior
  2007-04-25 21:08   ` David Hill
@ 2007-04-25 21:28     ` Andrew Lunn
  2007-04-26  9:44       ` Nick Garnett
  0 siblings, 1 reply; 8+ messages in thread
From: Andrew Lunn @ 2007-04-25 21:28 UTC (permalink / raw)
  To: David Hill; +Cc: Andrew Lunn, ecos-discuss

On Wed, Apr 25, 2007 at 05:08:30PM -0400, David Hill wrote:
> Thank you very much, Andrew.  cyg_thread_yield() gives me a good
> workaround.
> 
> Do any of the eCos synchronization primitives provide FIFO ordering?

When CYGIMP_KERNEL_SCHED_SORTED_QUEUES is enabled all thread queuing
when blocking is FIFO. However, as you have seen, this does not
actually mean the head of the queue gets the mutex next time. It just
controls which thread gets made runnable.

The best rule of thumb I can give is: don't expect equal-priority
threads to be fair to each other.
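
If you really need strict FIFO handoff, you could build it yourself on
top of a mutex and a condition variable. Here is a minimal, untested
sketch; ticket_lock_t and the helper functions are hypothetical, not
part of the eCos API:

typedef struct {
    cyg_mutex_t mtx;
    cyg_cond_t  cond;
    unsigned    next_ticket;   /* next ticket to hand out */
    unsigned    now_serving;   /* ticket allowed into the critical section */
} ticket_lock_t;

void ticket_lock_init(ticket_lock_t *l)
{
    cyg_mutex_init(&l->mtx);
    cyg_cond_init(&l->cond, &l->mtx);
    l->next_ticket = l->now_serving = 0;
}

void ticket_lock(ticket_lock_t *l)
{
    unsigned my_ticket;

    cyg_mutex_lock(&l->mtx);
    my_ticket = l->next_ticket++;
    while (my_ticket != l->now_serving)
        cyg_cond_wait(&l->cond);   /* re-test the condition on every wakeup */
    cyg_mutex_unlock(&l->mtx);
}

void ticket_unlock(ticket_lock_t *l)
{
    cyg_mutex_lock(&l->mtx);
    l->now_serving++;
    cyg_cond_broadcast(&l->cond);  /* wake all waiters; only the matching
                                      ticket proceeds, the rest sleep again */
    cyg_mutex_unlock(&l->mtx);
}

Threads then enter the critical section in the order they took a
ticket, no matter which runnable thread the scheduler happens to pick
first.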

        Andrew



* Re: [ECOS] question on eCos mutex behavior
  2007-04-25 21:28     ` Andrew Lunn
@ 2007-04-26  9:44       ` Nick Garnett
  0 siblings, 0 replies; 8+ messages in thread
From: Nick Garnett @ 2007-04-26  9:44 UTC (permalink / raw)
  To: Andrew Lunn; +Cc: David Hill, ecos-discuss

Andrew Lunn <andrew@lunn.ch> writes:

> On Wed, Apr 25, 2007 at 05:08:30PM -0400, David Hill wrote:
> > Thank you very much, Andrew.  cyg_thread_yield() gives me a good
> > workaround.
> > 
> > Do any of the eCos synchronization primitives provide FIFO ordering?
> 
> When CYGIMP_KERNEL_SCHED_SORTED_QUEUES is enabled all thread queuing
> when blocking is FIFO. However, as you have seen, this does not
> actually mean the head of the queue gets the mutex next time. It just
> controls which thread gets made runnable.
> 
> The best rule of thumb I can give is: don't expect equal-priority
> threads to be fair to each other.

To elaborate a bit more, the behaviour of mutexes, and other
synchronization primitives, was a deliberate design choice. The
general approach has been that when a thread needs to wait on a
primitive, it wakes up in the same state in which it slept, and must
therefore re-test the condition that causes it to sleep, and either
continue or go back to sleep. This is true of mutexes, semaphores,
message queues, flags, etc.
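
In code, that pattern is the classic condition-variable loop. A
minimal sketch (the names are illustrative, and cond is assumed to
have been initialized against mtx with cyg_cond_init):

cyg_mutex_lock(&mtx);
while (!condition_is_true)     /* woken up: re-test, maybe sleep again */
    cyg_cond_wait(&cond);      /* atomically releases mtx while waiting */
/* condition holds and mtx is held here */
do_work();
cyg_mutex_unlock(&mtx);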

This approach was taken to avoid certain artefacts that could occur if
the lock were handed off directly to the first waiter. Consider the
situation where the owner of a mutex is a high priority thread, and
the first thread on the queue is low priority. Now add a medium priority
thread that will attempt to lock the mutex once it is allowed to run
when the high priority thread sleeps. With direct handoff, the high
priority thread will hand the mutex to the low priority thread, but
once it sleeps the medium priority thread will be blocked on the mutex
until the low priority thread is done, resulting in a form of priority
inversion. With the current mechanism, when the high priority thread
sleeps both the low and medium priority threads are ready to run, and
the medium priority thread preempts and wins the mutex.

Letting competing threads fight it out in the scheduler according to
their priorities is a much fairer way of resolving conflicts of this
sort. It does occasionally cause slightly weird effects like the one
that the OP observed. However, this is a somewhat artificial example,
since in the real world threads don't generally loop continually
claiming and releasing a mutex.


Also note CYGIMP_KERNEL_SCHED_SORTED_QUEUES causes the thread queues
to be sorted according to priority. Disabling this option causes
thread queues to be stored in FIFO order.
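
So to get FIFO queues you would disable that option in your eCos
configuration. As an illustration only (this fragment is from memory,
not copied from a real file; check the exact form against your own
ecos.ecc):

cdl_option CYGIMP_KERNEL_SCHED_SORTED_QUEUES {
    user_value 0
};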


-- 
Nick Garnett                                     eCos Kernel Architect
eCosCentric Limited     http://www.eCosCentric.com/   The eCos experts
Barnwell House, Barnwell Drive, Cambridge, UK.    Tel: +44 1223 245571
Registered in England and Wales: Reg No 4422071.



Thread overview: 8 messages
2007-04-25 19:21 [ECOS] question on eCos mutex behavior David Hill
2007-04-25 19:39 ` Gary Thomas
2007-04-25 20:37   ` Andrew Lunn
2007-04-25 20:51 ` Andrew Lunn
2007-04-25 21:08   ` David Hill
2007-04-25 21:28     ` Andrew Lunn
2007-04-26  9:44       ` Nick Garnett
2007-04-25 21:17   ` Andrew Lunn
