From: kemi
Subject: Re: [PATCH v2 1/3] Tunables: Add tunables of spin count for pthread adaptive spin mutex
To: Florian Weimer, Adhemerval Zanella, Glibc alpha
Cc: Dave Hansen, Tim Chen, Andi Kleen, Ying Huang, Aaron Lu, Lu Aubrey
Date: Mon, 14 May 2018 04:06:00 -0000
Message-ID: <8dbdd127-01ad-5341-1824-52bd18f1d183@intel.com>
In-Reply-To: <3b8bb7bf-68a5-4084-e4dd-bb4fe4411bef@redhat.com>
References: <1524624988-29141-1-git-send-email-kemi.wang@intel.com> <0c66f19d-c0e8-accd-85dd-7e55dd6da1af@redhat.com> <55c818fb-1b7e-47d0-0287-2ea33ce69fd5@intel.com> <3b8bb7bf-68a5-4084-e4dd-bb4fe4411bef@redhat.com>

On 05/08/2018 23:44, Florian Weimer wrote:
> On 05/02/2018 01:06 PM, kemi wrote:
>> Hi, Florian
>> Thanks for taking the time to review.
>>
>> On 05/02/2018 16:04, Florian Weimer wrote:
>>> On 04/25/2018 04:56 AM, Kemi Wang wrote:
>>>
>>>> +  mutex {
>>>> +    spin_count {
>>>> +      type: INT_32
>>>> +      minval: 0
>>>> +      maxval: 30000
>>>> +      default: 1000
>>>> +    }
>>>
>>> How did you come up with the default and maximum values?  Larger maximum values might be useful for testing boundary conditions.
>>>
>>
>> For the maximum value of spin count:
>> Please notice that mutex->__data.__spins += (cnt - mutex->__data.__spins) / 8, and the variable *cnt* can reach
>> the value of spin count when spinning times out. In that case, mutex->__data.__spins is increased and can get
>> close to the value of spin count. Keeping the value of spin count below SHRT_MAX avoids overflowing the
>> mutex->__data.__spins variable, whose type may be short.
>
> Could you add this as a comment, please?
>

Sure:)

>> For the default value of spin count:
>> I referred to the previous count of 100 trylock attempts in the loop. Since that mode is changed to read-only
>> spinning, I suppose the value can be larger because of the lower overhead and latency of a read compared with
>> cmpxchg.
>
> Ahh, makes sense.  Perhaps put this information into the commit message.
>

I investigated the default value of spin count a bit more recently. It is clear that we should provide a larger
default value, since the spinning style changes from "TRYLOCK" (cmpxchg) to read-only spinning. But it is not
trivial to determine the best value, if a single best value even exists. The latency of an atomic operation such
as cmpxchg, compared with a plain read, depends on many factors: which core owns the cache line, the coherence
state of the cache line (M/E/S/I, or even O), cache-line transfers, and so on.
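To make the difference concrete, the two spinning styles compare roughly as follows (a simplified, standalone
sketch using C11 atomics; this is not the actual pthread_mutex_lock code, and the pause/backoff between
iterations is omitted):

#include <stdatomic.h>
#include <stdbool.h>

/* Illustration only: 0 = free, 1 = held.  */
static atomic_int lock_word;

/* Old style: a cmpxchg on every iteration ("trylock in the loop").  */
static bool
spin_with_trylock (int spin_count)
{
  for (int cnt = 0; cnt < spin_count; cnt++)
    {
      int expected = 0;
      if (atomic_compare_exchange_weak (&lock_word, &expected, 1))
        return true;   /* Acquired while spinning.  */
    }
  return false;        /* Timed out; the caller falls back to blocking.  */
}

/* New style: spin on a plain load and only attempt the cmpxchg once the
   lock looks free, so the cache line mostly stays in a shared state.  */
static bool
spin_with_read (int spin_count)
{
  for (int cnt = 0; cnt < spin_count; cnt++)
    {
      if (atomic_load_explicit (&lock_word, memory_order_relaxed) == 0)
        {
          int expected = 0;
          if (atomic_compare_exchange_weak (&lock_word, &expected, 1))
            return true;
        }
    }
  return false;
}

The read-only loop is what this patch series switches to; the sketch only illustrates why the per-iteration cost
differs (mostly shared reads versus a read-modify-write on the same cache line every iteration).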
Some earlier research [1] (Fig. 2 in that paper) has shown that the latency of cmpxchg is 1.5x longer than a
single read under the same conditions on Haswell. So let's set the default value of spin count to 150 and run
some benchmarks to test it. What do you think?

[1] Lesani, Mohsen, Todd Millstein, and Jens Palsberg. "Automatic Atomicity Verification for Clients of
Concurrent Data Structures." International Conference on Computer Aided Verification. Springer, Cham, 2014.

>> Perhaps we should make the default value of spin count different according to architecture.
>
> Sure, or if there is just a single good choice for the tunable, just use that and remove the tunable again.  I guess one aspect here is to experiment with different values and see if there's a clear winner.
>

Two reasons for keeping the tunable here:
1) The overhead of the instructions involved is architecture-specific, so it is hard or even impossible to pick
one default value that fits every architecture well. E.g. the pause instruction on the Skylake platform is about
10x more expensive than on earlier platforms.
2) Different kinds of workloads may need different spin timeouts. I have heard many complaints from customers
that the pthread adaptive spin mutex does not work well for their practical workloads. Let's keep a tunable here
for them.

>>>> +# define TUNABLE_CALLBACK_FNDECL(__name, __type)            \
>>>> +static inline void                        \
>>>> +__always_inline                            \
>>>> +do_set_mutex_ ## __name (__type value)            \
>>>> +{                                \
>>>> +  __mutex_aconf.__name = value;                \
>>>> +}                                \
>>>> +void                                \
>>>> +TUNABLE_CALLBACK (set_mutex_ ## __name) (tunable_val_t *valp) \
>>>> +{                                \
>>>> +  __type value = (__type) (valp)->numval;            \
>>>> +  do_set_mutex_ ## __name (value);                \
>>>> +}
>>>> +
>>>> +TUNABLE_CALLBACK_FNDECL (spin_count, int32_t);
>>>
>>> I'm not sure if the macro is helpful in this context.
>
>> It is a matter of taste.
>> But perhaps we will have other mutex tunables in the future.
>
> We can still macroize the code at that point.  But no strong preference here.
>

That's all right.

>>>> +void (*const __pthread_mutex_tunables_init_array []) (int, char **, char **)
>>>> +  __attribute__ ((section (INIT_SECTION), aligned (sizeof (void *)))) =
>>>> +{
>>>> +  &mutex_tunables_init
>>>> +};
>>>
>>> Can't you perform the initialization as part of overall pthread initialization?  This would avoid the extra relocation.
>
>> Thanks for your suggestion. I am not sure how to do it yet and will take a look at it.
>
> The code would go into nptl/nptl-init.c.  It's just an idea, but I think it should be possible to make it work.
>

Thanks, I will take a look at it and see whether we can get a benefit from that.

> Thanks,
> Florian
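P.S. Regarding the nptl/nptl-init.c idea, here is a rough, untested sketch of how I read the suggestion (the
callback name comes from the current patch; hooking it into __pthread_initialize_minimal_internal is only my
assumption, the exact place would need checking):

/* Sketch for nptl/pthread_mutex_conf.c: read the tunable directly at
   initialization time instead of registering an init-array entry.  */
#if HAVE_TUNABLES
# define TUNABLE_NAMESPACE mutex
#endif
#include <elf/dl-tunables.h>

void
__pthread_mutex_tunables_init (void)
{
#if HAVE_TUNABLES
  TUNABLE_GET (spin_count, int32_t,
               TUNABLE_CALLBACK (set_mutex_spin_count));
#endif
}

/* Sketch for nptl/nptl-init.c: call it from the existing initialization
   path so that no extra relocation is needed.  */
void
__pthread_initialize_minimal_internal (void)
{
  /* ... existing initialization ...  */
  __pthread_mutex_tunables_init ();
}

If that works, the __pthread_mutex_tunables_init_array object and its relocation could be dropped entirely.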