Message-ID: <4FD9DB74.8080905@tilera.com>
Date: Thu, 14 Jun 2012 12:39:00 -0000
From: Chris Metcalf
To: Maxim Kuvyrkov
CC: "Joseph S. Myers", GLIBC Devel
Subject: Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.
List-Id: libc-ports@sourceware.org

On 6/14/2012 1:03 AM, Maxim Kuvyrkov wrote:
> These two patches (libc part and ports part) optimize libc_lock_lock() macro that GLIBC uses for locking internally to take advantage of fetch_and_add instruction that is available as an extension on certain processors, e.g., MIPS-architecture XLP.
>
> The libc_lock_lock macros implement boolean lock: 0 corresponds to unlocked state and non-zero corresponds to locked state.

Just to be clear, if you put this comment somewhere when you commit, you should say that locks are tristate: 0 is unlocked, 1 is locked and uncontended, and >1 is locked and contended.
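The tristate convention can be sketched with C11 atomics. These are hypothetical helper names for illustration only; glibc's real internals are the lll_*/__libc_lock_* macros backed by futex calls:

```c
#include <stdatomic.h>

/* Tristate lock word, per the convention above:
   0 = unlocked, 1 = locked & uncontended, >1 = locked & contended. */
typedef atomic_int simple_lock_t;

/* Fast path: CAS 0 -> 1.  Returns 1 if we took the lock, 0 if it
   was already held. */
int simple_trylock(simple_lock_t *l)
{
    int expected = 0;
    return atomic_compare_exchange_strong(l, &expected, 1);
}

/* Slow path: a blocked waiter marks the lock contended by swapping
   in 2; the real code would then futex_wait on the lock word.
   Returns the previous value. */
int simple_mark_contended(simple_lock_t *l)
{
    return atomic_exchange(l, 2);
}

/* Unlock: plain exchange back to 0.  A previous value > 1 tells the
   unlocker that waiters exist and must be woken (futex_wake). */
int simple_unlock(simple_lock_t *l)
{
    return atomic_exchange(l, 0);
}
```

The point of the 1-vs->1 distinction is that an uncontended unlock can skip the futex_wake syscall entirely.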
> It is, therefore, possible to use fetch_and_add semantics to acquire lock in libc_lock_lock. For XLP this translates to a single LDADD instruction. This optimization allows architectures that can perform fetch_and_add faster than compare_and_exchange, such situation is indicated by defining the new macro "lll_add_lock".
>
> The unlocking counterpart doesn't require any change as it is already uses plain atomic_exchange operation, which, incidentally, also supported on XLP as a single instruction.

This seems like it would work well for a single thread acquiring the lock, but I have some questions about its behavior when multiple threads try to acquire the lock.

First, the generic __lll_lock_wait() code assumes the contended value is exactly "2". So if two or more threads try and fail to acquire the lock, the value will be >2. This will cause the waiters to busy-wait, spinning on atomic exchange instructions, rather than calling into futex_wait(). I think it might be possible to change the generic code to support the more general ">1" semantics of contended locks, but it might be a bit less efficient, so you might end up wanting to provide overrides for these functions on MIPS. Even on MIPS it might result in a certain amount of spinning, since you'd have to hit the race window correctly to feed the right value of the lock to futex_wait().

Second, if a lock is held long enough for 4 billion threads to try to acquire it and fail, the 32-bit lock word will wrap around and you will end up with an apparently unlocked lock. :-) I'm not sure how likely this seems, but it is a potential issue. You might consider, for example, doing a cmpxchg on the contended-lock path to try to reset the lock value back to 2 again; if it fails, it's not a big deal, since statistically I would expect the occasional thread to succeed, which is all you need.

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com
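The fetch_and_add acquisition path, together with the clamp-back-to-2 suggestion, might look roughly like this in C11 atomics. The function name is hypothetical; the actual patch defines an lll_add_lock macro inside glibc, and a real waiter would loop around futex_wait rather than return:

```c
#include <stdatomic.h>

/* One acquisition attempt via fetch_and_add (a single LDADD on XLP).
   Returns 0 if the lock was acquired, 1 if it was contended. */
int faa_lock_once(atomic_int *l)
{
    /* Previous value 0 means we now own the lock (value is now 1). */
    if (atomic_fetch_add(l, 1) == 0)
        return 0;

    /* Contended: the value may now be anywhere above 1.  Try once to
       clamp it back to 2, so the counter cannot creep toward 32-bit
       wraparound and generic code expecting exactly 2 still works.
       A failed CAS is harmless; the occasional success is enough to
       keep the counter bounded. */
    int cur = atomic_load(l);
    if (cur > 2)
        atomic_compare_exchange_strong(l, &cur, 2);
    return 1;  /* caller would futex_wait on the lock word here */
}
```

Note the clamp is best-effort by design, matching the "if it fails, it's not a big deal" reasoning above: correctness only needs some thread to win the CAS occasionally, not every thread.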