From: Chris Metcalf
Date: Wed, 25 Jul 2012 19:04:00 -0000
To: Roland McGrath
CC: Maxim Kuvyrkov, Andrew Haley, David Miller
Subject: Re: [PATCH] Unify pthread_spin_[try]lock implementations.
In-Reply-To: <20120725181300.DD1812C0B5@topped-with-meat.com>

On 7/25/2012 2:13 PM, Roland McGrath wrote:
> Here I think the reasonable thing to do is:
>
>   /* A machine-specific version can define SPIN_LOCK_READS_BETWEEN_CMPXCHG
>      to the number of plain reads that it's optimal to spin on between uses
>      of atomic_compare_and_exchange_val_acq.  If spinning forever is optimal
>      then use -1.  If no plain reads here would ever be optimal, use 0.  */
>   #ifndef SPIN_LOCK_READS_BETWEEN_CMPXCHG
>   # warning machine-dependent file should define SPIN_LOCK_READS_BETWEEN_CMPXCHG
>   # define SPIN_LOCK_READS_BETWEEN_CMPXCHG 1000
>   #endif
>
> Then ARM et al can do:
>
>   /* Machine-dependent rationale about the selection of this value.  */
>   #define SPIN_LOCK_READS_BETWEEN_CMPXCHG 1000
>   #include
>
> while Tile will use -1.

The tile architecture is unlikely to use this generic version no matter what; see http://sourceware.org/ml/libc-ports/2012-07/msg00030.html for the details. The primary point is that on a mesh-based architecture it's a bad idea to ever end up in a situation where all the cores are spinning, issuing loads or cmpxchg as fast as they can, so some kind of backoff is necessary.

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com