Message-ID: <4FDAA190.3050706@tilera.com>
Date: Fri, 15 Jun 2012 02:44:00 -0000
From: Chris Metcalf
To: Maxim Kuvyrkov
Cc: "Joseph S. Myers", GLIBC Devel, Tom de Vries
Subject: Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.
References: <4FD9DB74.8080905@tilera.com> <40CBC472-71CC-4FF3-A452-073B76701215@codesourcery.com>
In-Reply-To: <40CBC472-71CC-4FF3-A452-073B76701215@codesourcery.com>

On 6/14/2012 9:20 PM, Maxim Kuvyrkov wrote:
> On 15/06/2012, at 12:39 AM, Chris Metcalf wrote:
>
>> On 6/14/2012 1:03 AM, Maxim Kuvyrkov wrote:
>>> These two patches (libc part and ports part) optimize the libc_lock_lock() macro that GLIBC uses for locking internally, to take advantage of the fetch_and_add instruction that is available as an extension on certain processors, e.g., the MIPS-architecture XLP.
>>>
>>> The libc_lock_lock macros implement a boolean lock: 0 corresponds to the unlocked state and non-zero corresponds to the locked state.
>> Just to be clear, if you put this comment somewhere when you commit, you should say locks are tristate, where 0 is unlocked, 1 is locked and uncontended, and >1 is locked and contended.
> Right, it's all coming back now. I will update the comments to mention this. [This optimization was written around 6 months ago, and not by me. This and the points below are worth elaborating on, thanks for bringing them up.]
>
> I've CC'ed Tom de Vries, who is the original author of the patch. Tom, please let us know if I'm misrepresenting the optimization or the rationale for its correctness.
>
>>> It is, therefore, possible to use fetch_and_add semantics to acquire the lock in libc_lock_lock. For XLP this translates to a single LDADD instruction. This optimization benefits architectures that can perform fetch_and_add faster than compare_and_exchange; such a situation is indicated by defining the new macro "lll_add_lock".
>>>
>>> The unlocking counterpart doesn't require any change, as it already uses a plain atomic_exchange operation which, incidentally, is also supported on XLP as a single instruction.
>> This seems like it would work well for a single thread acquiring the lock, but I have some questions about it in the presence of multiple threads trying to acquire the lock.
>>
>> First, the generic __lll_lock_wait() code assumes the contended value is exactly "2".
> Um, not exactly. __lll_lock_wait() *sets* the contended lock to a value of "2", but it will work as well with >2 values.
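[For comparison with the generic wait loop quoted just below, a minimal sketch of the fetch_and_add acquisition being discussed. The macro name lll_add_lock comes from the thread, and atomic_exchange_and_add is glibc's existing <atomic.h> operation returning the value the word held before the add; the body itself is an illustration written as a plain function for readability (assuming glibc's internal headers are in scope), not the actual patch:

    /* Sketch only: acquire the lock with a single fetch_and_add
       (a single LDADD on XLP).  A prior value of 0 means we took
       a free lock, leaving it at 1 (locked, uncontended).  Any
       other prior value means the lock was held; our add has left
       it at a value > 1 (locked, contended), so take the slow
       path.  */
    static inline void
    lll_add_lock_sketch (int *futex, int private)
    {
      if (atomic_exchange_and_add (futex, 1) != 0)
        __lll_lock_wait (futex, private);
    }
]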
>
>     void
>     __lll_lock_wait (int *futex, int private)
>     {
>       if (*futex == 2)
>         lll_futex_wait (futex, 2, private);
>
>       while (atomic_exchange_acq (futex, 2) != 0)
>         lll_futex_wait (futex, 2, private);
>     }
>
>> So if two or more threads both try and fail to acquire the lock, the value will be >2. This will cause the waiters to busywait, spinning on atomic exchange instructions, rather than calling into futex_wait().
> As I read it, in the case of a contended lock __lll_lock_wait() will reset the value of the lock to "2" before calling lll_futex_wait(). I agree that there is a timing window in which the other threads will see a value of the lock greater than "2", but the value will not get as high as hundreds or billions, as it will be constantly reset to "2" by the atomic_exchange in __lll_lock_wait().
>
> I do not see how threads will get into a busywait state, though. Would you please elaborate on that?

You are correct. I was thinking that the while loop had a cmpxchg, but since it's just a straight-up exchange, the flow will be something like:

- Fail to initially call lll_futex_wait() if the lock is contended
- Fall through to the while loop
- Spin as long as the lock is contended enough that *futex > 2
- Enter futex_wait

So it's a little busy under high contention, but it probably settles out reasonably well. Since Tilera makes chips with 64 cores, I tend to worry more about spinning race cases with a lot of cores contending at once :-)

>> I think it might be possible to change the generic code to support the more general ">1" semantics of contended locks, but it might be a bit less efficient, so you might end up wanting to provide overrides for these functions on MIPS. Even on MIPS it might result in a certain amount of spinning, since you'd have to hit the race window correctly to feed the right value of the lock to futex_wait.
>>
>> Second, if a lock is held long enough for 4 billion threads to try to acquire it and fail, you will end up with an unlocked lock. :-) I'm not sure how likely this seems, but it is a potential issue. You might consider, for example, doing a cmpxchg on the contended-lock path to try to reset the lock value back to 2 again; if it fails, it's not a big deal, since statistically I would expect the occasional thread to succeed, which is all you need.

[A sketch of this cmpxchg reset appears after the signature below.]

> Thank you,
>
> --
> Maxim Kuvyrkov
> CodeSourcery / Mentor Graphics

--
Chris Metcalf, Tilera Corp.
http://www.tilera.com
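[Extending the earlier sketch with the cmpxchg reset suggested above, so that repeated failed fetch_and_add acquisitions cannot wrap the counter around to 0 and make a held lock appear free. This is an illustration of the suggestion, not code from the patch; atomic_compare_and_exchange_val_acq is glibc's existing macro (store newval if the word equals oldval, returning the value found), and the exact placement shown is an assumption:

    /* Sketch only: after a failed fetch_and_add acquisition, make
       one best-effort attempt to pull the counter back down to the
       canonical contended value 2.  The cmpxchg succeeds only if
       the lock still holds exactly the value our add produced
       (old + 1, which is >= 3 here), so it can never stomp on a
       freed (0) or freshly-taken (1) lock.  Losing the race is
       harmless: statistically, the occasional thread succeeding is
       all that is needed.  */
    static inline void
    lll_add_lock_with_reset_sketch (int *futex, int private)
    {
      int old = atomic_exchange_and_add (futex, 1);
      if (old != 0)
        {
          if (old > 1)
            atomic_compare_and_exchange_val_acq (futex, 2, old + 1);
          __lll_lock_wait (futex, private);
        }
    }
]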