From: Andrew Lunn
Date: Wed, 25 Apr 2007 20:51:00 -0000
To: David Hill
Cc: ecos-discuss@ecos.sourceware.org
Subject: Re: [ECOS] question on eCos mutex behavior
Message-ID: <20070425205138.GC4336@lunn.ch>
In-Reply-To: <6F6F9AF887E54B47961CA0F5D1C228D1021FA3F6@shake.airdefense.net>

On Wed, Apr 25, 2007 at 03:20:58PM -0400, David Hill wrote:
> I'm experiencing some unexpected behavior using mutexes under eCos
> and I'm wondering if this is just the way it works or if I might be
> missing something. I created a simple test case below to illustrate
> the behavior: I have two threads that are both running at the same
> priority and always trying to get a mutex lock. The only difference
> between them is that I guaranteed that the first would win the first
> time by inserting an artificial delay into the 2nd thread.
> I expected that when the 1st thread unlocks the mutex, and tries to
> take it again, the cyg_mutex_lock() function would hang because
> there is already another thread pending on that mutex.

You can see the code in packages/kernel/current/src/sync/mutex.cxx.

The unlock call in thread1 just marks thread2 as runnable. Since both
threads are running at the same priority, there is no preemption. So
thread1 loops around and, since the mutex is not locked yet, locks it
again. When you add the delay, thread2 gets a chance to run after
being made runnable via the unlock. It then claims the lock and you
get the ping-pong behaviour.

In order for mutexes to work the way you thought they should, the
unlock call would have to actually re-lock the mutex on behalf of
thread2. This seems a bit ugly to me.

> However, what I'm seeing is that the 1st thread continues to succeed
> over and over again and the 2nd thread gets starved, so I see lots
> of "LOCK0" statements and no "LOCK1" statements. If I uncomment the
> 'cyg_thread_delay(1)' statement after the mutex is unlocked, then I
> get the nice ping-pong effect I was expecting, but I can't really
> use that workaround for my application.

You could also use cyg_thread_yield().

> If this is expected behavior, then is there a different mutual
> exclusion primitive that will provide ordering?

You could do something like this, with mutex being your cyg_mutex_t:

    for (;;) {
        cyg_mutex_lock(&mutex);
        /* Lower our priority number, i.e. raise our priority,
           while holding the lock. */
        cyg_thread_set_priority(cyg_thread_self(),
            cyg_thread_get_priority(cyg_thread_self()) - 1);
        work();
        cyg_mutex_unlock(&mutex);
        /* Restore the original priority; the reschedule gives the
           other thread a chance to claim the mutex. */
        cyg_thread_set_priority(cyg_thread_self(),
            cyg_thread_get_priority(cyg_thread_self()) + 1);
    }

but cyg_thread_yield() seems like a cleaner solution.

        Andrew

-- 
Before posting, please read the FAQ:
        http://ecos.sourceware.org/fom/ecos
and search the list archive:
        http://ecos.sourceware.org/ml/ecos-discuss
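[Editor's note: the scheduling argument above can be checked with a small
stand-alone C program. This is a toy model, not the eCos kernel API: the
names toy_mutex, toy_lock, and toy_unlock are invented for illustration.
It models only the one decision being discussed: unlock marks the waiter
runnable but, at equal priority, does not preempt the caller, so the
caller can immediately re-lock and the waiter starves.]

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Toy model (hypothetical names, not the real eCos API). */
    typedef struct {
        bool locked;
        int  owner;            /* -1 means the mutex is free      */
        bool waiter_runnable;  /* set by unlock; no preemption    */
    } toy_mutex;

    /* Non-blocking model of lock: returns false where a real
       kernel would put the caller to sleep on the mutex. */
    static bool toy_lock(toy_mutex *m, int tid) {
        if (!m->locked) {
            m->locked = true;
            m->owner  = tid;
            return true;
        }
        return false;
    }

    /* Unlock only marks the waiter runnable; at equal priority the
       unlocking thread keeps the CPU, as described above. */
    static void toy_unlock(toy_mutex *m) {
        m->locked          = false;
        m->owner           = -1;
        m->waiter_runnable = true;
    }

    int main(void) {
        toy_mutex m = { false, -1, false };

        assert(toy_lock(&m, 0));   /* thread 0 wins the first round */
        assert(!toy_lock(&m, 1));  /* thread 1 blocks on the mutex  */

        toy_unlock(&m);            /* thread 1 runnable, but thread 0
                                      keeps running ...             */
        assert(toy_lock(&m, 0));   /* ... and re-locks immediately  */
        assert(m.owner == 0);

        printf("thread 0 re-acquired the lock; thread 1 starved\n");
        return 0;
    }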