From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20210419233607.916848-1-goldstein.w.n@gmail.com> <20210419233607.916848-2-goldstein.w.n@gmail.com>
In-Reply-To:
From: Sunil Pandey
Date: Wed, 28 Sep 2022 06:54:34 -0700
Message-ID:
Subject: Re: [PATCH v5 2/2] x86: Optimize strlen-avx2.S
To: Noah Goldstein, Libc-stable Mailing List, Hongjiu Lu
Cc: GNU C Library
Content-Type: multipart/mixed; boundary="000000000000f0c82305e9bd1b70"

--000000000000f0c82305e9bd1b70
Content-Type: text/plain; charset="UTF-8"

Attached patch fixes BZ# 29611.

I would like to backport it to 2.32, 2.31, 2.30, 2.29 and 2.28.
Let me know if there is any objection.

On Sun, Sep 25, 2022 at 7:00 AM Noah Goldstein via Libc-alpha wrote:
>
> On Sun, Sep 25, 2022 at 1:19 AM Aurelien Jarno wrote:
> >
> > On 2021-04-19 19:36, Noah Goldstein via Libc-alpha wrote:
> > > No bug. This commit optimizes strlen-avx2.S.
> > > The optimizations are mostly small things but they add up to roughly
> > > 10-30% performance improvement for strlen. The results for strnlen
> > > are a bit more ambiguous. test-strlen, test-strnlen, test-wcslen,
> > > and test-wcsnlen are all passing.
> > >
> > > Signed-off-by: Noah Goldstein
> > > ---
> > >  sysdeps/x86_64/multiarch/ifunc-impl-list.c |  16 +-
> > >  sysdeps/x86_64/multiarch/strlen-avx2.S     | 532 +++++++++++++--------
> > >  2 files changed, 334 insertions(+), 214 deletions(-)
> > >
> > > diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> > > index c377cab629..651b32908e 100644
> > > --- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> > > +++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> > > @@ -293,10 +293,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> > >    /* Support sysdeps/x86_64/multiarch/strlen.c.  */
> > >    IFUNC_IMPL (i, name, strlen,
> > >        IFUNC_IMPL_ADD (array, i, strlen,
> > > -          CPU_FEATURE_USABLE (AVX2),
> > > +          (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)),
> > >            __strlen_avx2)
> > >        IFUNC_IMPL_ADD (array, i, strlen,
> > >            (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)
> > >             && CPU_FEATURE_USABLE (RTM)),
> > >            __strlen_avx2_rtm)
> > >        IFUNC_IMPL_ADD (array, i, strlen,
> > > @@ -309,10 +311,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> > >    /* Support sysdeps/x86_64/multiarch/strnlen.c.  */
> > >    IFUNC_IMPL (i, name, strnlen,
> > >        IFUNC_IMPL_ADD (array, i, strnlen,
> > > -          CPU_FEATURE_USABLE (AVX2),
> > > +          (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)),
> > >            __strnlen_avx2)
> > >        IFUNC_IMPL_ADD (array, i, strnlen,
> > >            (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)
> > >             && CPU_FEATURE_USABLE (RTM)),
> > >            __strnlen_avx2_rtm)
> > >        IFUNC_IMPL_ADD (array, i, strnlen,
> > > @@ -654,10 +658,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> > >    /* Support sysdeps/x86_64/multiarch/wcslen.c.  */
> > >    IFUNC_IMPL (i, name, wcslen,
> > >        IFUNC_IMPL_ADD (array, i, wcslen,
> > > -          CPU_FEATURE_USABLE (AVX2),
> > > +          (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)),
> > >            __wcslen_avx2)
> > >        IFUNC_IMPL_ADD (array, i, wcslen,
> > >            (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)
> > >             && CPU_FEATURE_USABLE (RTM)),
> > >            __wcslen_avx2_rtm)
> > >        IFUNC_IMPL_ADD (array, i, wcslen,
> > > @@ -670,10 +676,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> > >    /* Support sysdeps/x86_64/multiarch/wcsnlen.c.  */
> > >    IFUNC_IMPL (i, name, wcsnlen,
> > >        IFUNC_IMPL_ADD (array, i, wcsnlen,
> > > -          CPU_FEATURE_USABLE (AVX2),
> > > +          (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)),
> > >            __wcsnlen_avx2)
> > >        IFUNC_IMPL_ADD (array, i, wcsnlen,
> > >            (CPU_FEATURE_USABLE (AVX2)
> > > +           && CPU_FEATURE_USABLE (BMI2)
> > >             && CPU_FEATURE_USABLE (RTM)),
> > >            __wcsnlen_avx2_rtm)
> > >        IFUNC_IMPL_ADD (array, i, wcsnlen,
> > > diff --git a/sysdeps/x86_64/multiarch/strlen-avx2.S b/sysdeps/x86_64/multiarch/strlen-avx2.S
> > > index 1caae9e6bc..bd2e6ee44a 100644
> > > --- a/sysdeps/x86_64/multiarch/strlen-avx2.S
> > > +++ b/sysdeps/x86_64/multiarch/strlen-avx2.S
> > > @@ -27,9 +27,11 @@
> > >  # ifdef USE_AS_WCSLEN
> > >  #  define VPCMPEQ vpcmpeqd
> > >  #  define VPMINU vpminud
> > > +#  define CHAR_SIZE 4
> > >  # else
> > >  #  define VPCMPEQ vpcmpeqb
> > >  #  define VPMINU vpminub
> > > +#  define CHAR_SIZE 1
> > >  # endif
> > >
> > >  # ifndef VZEROUPPER
> > > @@ -41,349 +43,459 @@
> > >  # endif
> > >
> > >  # define VEC_SIZE 32
> > > +# define PAGE_SIZE 4096
> > >
> > >      .section SECTION(.text),"ax",@progbits
> > >  ENTRY (STRLEN)
> > >  # ifdef USE_AS_STRNLEN
> > > -    /* Check for zero length.  */
> > > +    /* Check zero length.  */
> > >      test %RSI_LP, %RSI_LP
> > >      jz L(zero)
> > > +    /* Store max len in R8_LP before adjusting if using WCSLEN.  */
> > > +    mov %RSI_LP, %R8_LP
> > >  # ifdef USE_AS_WCSLEN
> > >      shl $2, %RSI_LP
> > >  # elif defined __ILP32__
> > >      /* Clear the upper 32 bits.  */
> > >      movl %esi, %esi
> > >  # endif
> > > -    mov %RSI_LP, %R8_LP
> > >  # endif
> > > -    movl %edi, %ecx
> > > +    movl %edi, %eax
> > >      movq %rdi, %rdx
> > >      vpxor %xmm0, %xmm0, %xmm0
> > > -
> > > +    /* Clear high bits from edi. Only keeping bits relevant to page
> > > +       cross check.  */
> > > +    andl $(PAGE_SIZE - 1), %eax
> > >      /* Check if we may cross page boundary with one vector load.  */
> > > -    andl $(2 * VEC_SIZE - 1), %ecx
> > > -    cmpl $VEC_SIZE, %ecx
> > > -    ja L(cros_page_boundary)
> > > +    cmpl $(PAGE_SIZE - VEC_SIZE), %eax
> > > +    ja L(cross_page_boundary)
> > >
> > >      /* Check the first VEC_SIZE bytes.  */
> > > -    VPCMPEQ (%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    testl %eax, %eax
> > > -
> > > +    VPCMPEQ (%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > >  # ifdef USE_AS_STRNLEN
> > > -    jnz L(first_vec_x0_check)
> > > -    /* Adjust length and check the end of data.  */
> > > -    subq $VEC_SIZE, %rsi
> > > -    jbe L(max)
> > > -# else
> > > -    jnz L(first_vec_x0)
> > > +    /* If length < VEC_SIZE handle special.  */
> > > +    cmpq $VEC_SIZE, %rsi
> > > +    jbe L(first_vec_x0)
> > >  # endif
> > > -
> > > -    /* Align data for aligned loads in the loop.  */
> > > -    addq $VEC_SIZE, %rdi
> > > -    andl $(VEC_SIZE - 1), %ecx
> > > -    andq $-VEC_SIZE, %rdi
> > > +    /* If empty continue to aligned_more. Otherwise return bit
> > > +       position of first match.  */
> > > +    testl %eax, %eax
> > > +    jz L(aligned_more)
> > > +    tzcntl %eax, %eax
> > > +# ifdef USE_AS_WCSLEN
> > > +    shrl $2, %eax
> > > +# endif
> > > +    VZEROUPPER_RETURN
> > >
> > >  # ifdef USE_AS_STRNLEN
> > > -    /* Adjust length.  */
> > > -    addq %rcx, %rsi
> > > +L(zero):
> > > +    xorl %eax, %eax
> > > +    ret
> > >
> > > -    subq $(VEC_SIZE * 4), %rsi
> > > -    jbe L(last_4x_vec_or_less)
> > > +    .p2align 4
> > > +L(first_vec_x0):
> > > +    /* Set bit for max len so that tzcnt will return min of max len
> > > +       and position of first match.  */
> > > +    btsq %rsi, %rax
> > > +    tzcntl %eax, %eax
> > > +# ifdef USE_AS_WCSLEN
> > > +    shrl $2, %eax
> > > +# endif
> > > +    VZEROUPPER_RETURN
> > >  # endif
> > > -    jmp L(more_4x_vec)
> > >
> > >      .p2align 4
> > > -L(cros_page_boundary):
> > > -    andl $(VEC_SIZE - 1), %ecx
> > > -    andq $-VEC_SIZE, %rdi
> > > -    VPCMPEQ (%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    /* Remove the leading bytes.  */
> > > -    sarl %cl, %eax
> > > -    testl %eax, %eax
> > > -    jz L(aligned_more)
> > > +L(first_vec_x1):
> > >      tzcntl %eax, %eax
> > > +    /* Safe to use 32 bit instructions as these are only called for
> > > +       size = [1, 159].  */
> > >  # ifdef USE_AS_STRNLEN
> > > -    /* Check the end of data.  */
> > > -    cmpq %rax, %rsi
> > > -    jbe L(max)
> > > +    /* Use ecx which was computed earlier to compute correct value.
> > > +     */
> > > +    subl $(VEC_SIZE * 4 + 1), %ecx
> > > +    addl %ecx, %eax
> > > +# else
> > > +    subl %edx, %edi
> > > +    incl %edi
> > > +    addl %edi, %eax
> > >  # endif
> > > -    addq %rdi, %rax
> > > -    addq %rcx, %rax
> > > -    subq %rdx, %rax
> > >  # ifdef USE_AS_WCSLEN
> > > -    shrq $2, %rax
> > > +    shrl $2, %eax
> > >  # endif
> > > -L(return_vzeroupper):
> > > -    ZERO_UPPER_VEC_REGISTERS_RETURN
> > > +    VZEROUPPER_RETURN
> > >
> > >      .p2align 4
> > > -L(aligned_more):
> > > +L(first_vec_x2):
> > > +    tzcntl %eax, %eax
> > > +    /* Safe to use 32 bit instructions as these are only called for
> > > +       size = [1, 159].  */
> > >  # ifdef USE_AS_STRNLEN
> > > -    /* "rcx" is less than VEC_SIZE.  Calculate "rdx + rcx - VEC_SIZE"
> > > -        with "rdx - (VEC_SIZE - rcx)" instead of "(rdx + rcx) - VEC_SIZE"
> > > -        to void possible addition overflow.  */
> > > -    negq %rcx
> > > -    addq $VEC_SIZE, %rcx
> > > -
> > > -    /* Check the end of data.  */
> > > -    subq %rcx, %rsi
> > > -    jbe L(max)
> > > +    /* Use ecx which was computed earlier to compute correct value.
> > > +     */
> > > +    subl $(VEC_SIZE * 3 + 1), %ecx
> > > +    addl %ecx, %eax
> > > +# else
> > > +    subl %edx, %edi
> > > +    addl $(VEC_SIZE + 1), %edi
> > > +    addl %edi, %eax
> > >  # endif
> > > +# ifdef USE_AS_WCSLEN
> > > +    shrl $2, %eax
> > > +# endif
> > > +    VZEROUPPER_RETURN
> > >
> > > -    addq $VEC_SIZE, %rdi
> > > +    .p2align 4
> > > +L(first_vec_x3):
> > > +    tzcntl %eax, %eax
> > > +    /* Safe to use 32 bit instructions as these are only called for
> > > +       size = [1, 159].  */
> > > +# ifdef USE_AS_STRNLEN
> > > +    /* Use ecx which was computed earlier to compute correct value.
> > > +     */
> > > +    subl $(VEC_SIZE * 2 + 1), %ecx
> > > +    addl %ecx, %eax
> > > +# else
> > > +    subl %edx, %edi
> > > +    addl $(VEC_SIZE * 2 + 1), %edi
> > > +    addl %edi, %eax
> > > +# endif
> > > +# ifdef USE_AS_WCSLEN
> > > +    shrl $2, %eax
> > > +# endif
> > > +    VZEROUPPER_RETURN
> > >
> > > +    .p2align 4
> > > +L(first_vec_x4):
> > > +    tzcntl %eax, %eax
> > > +    /* Safe to use 32 bit instructions as these are only called for
> > > +       size = [1, 159].  */
> > >  # ifdef USE_AS_STRNLEN
> > > -    subq $(VEC_SIZE * 4), %rsi
> > > -    jbe L(last_4x_vec_or_less)
> > > +    /* Use ecx which was computed earlier to compute correct value.
> > > +     */
> > > +    subl $(VEC_SIZE + 1), %ecx
> > > +    addl %ecx, %eax
> > > +# else
> > > +    subl %edx, %edi
> > > +    addl $(VEC_SIZE * 3 + 1), %edi
> > > +    addl %edi, %eax
> > >  # endif
> > > +# ifdef USE_AS_WCSLEN
> > > +    shrl $2, %eax
> > > +# endif
> > > +    VZEROUPPER_RETURN
> > >
> > > -L(more_4x_vec):
> > > +    .p2align 5
> > > +L(aligned_more):
> > > +    /* Align data to VEC_SIZE - 1. This is the same number of
> > > +       instructions as using andq with -VEC_SIZE but saves 4 bytes of
> > > +       code on the x4 check.  */
> > > +    orq $(VEC_SIZE - 1), %rdi
> > > +L(cross_page_continue):
> > >      /* Check the first 4 * VEC_SIZE. Only one VEC_SIZE at a time
> > >         since data is only aligned to VEC_SIZE.  */
> > > -    VPCMPEQ (%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    testl %eax, %eax
> > > -    jnz L(first_vec_x0)
> > > -
> > > -    VPCMPEQ VEC_SIZE(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > +# ifdef USE_AS_STRNLEN
> > > +    /* + 1 because rdi is aligned to VEC_SIZE - 1. + CHAR_SIZE because
> > > +       it simplies the logic in last_4x_vec_or_less.  */
> > > +    leaq (VEC_SIZE * 4 + CHAR_SIZE + 1)(%rdi), %rcx
> > > +    subq %rdx, %rcx
> > > +# endif
> > > +    /* Load first VEC regardless.  */
> > > +    VPCMPEQ 1(%rdi), %ymm0, %ymm1
> > > +# ifdef USE_AS_STRNLEN
> > > +    /* Adjust length. If near end handle specially.  */
> > > +    subq %rcx, %rsi
> > > +    jb L(last_4x_vec_or_less)
> > > +# endif
> > > +    vpmovmskb %ymm1, %eax
> > >      testl %eax, %eax
> > >      jnz L(first_vec_x1)
> > >
> > > -    VPCMPEQ (VEC_SIZE * 2)(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > +    VPCMPEQ (VEC_SIZE + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > >      testl %eax, %eax
> > >      jnz L(first_vec_x2)
> > >
> > > -    VPCMPEQ (VEC_SIZE * 3)(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > +    VPCMPEQ (VEC_SIZE * 2 + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > >      testl %eax, %eax
> > >      jnz L(first_vec_x3)
> > >
> > > -    addq $(VEC_SIZE * 4), %rdi
> > > -
> > > -# ifdef USE_AS_STRNLEN
> > > -    subq $(VEC_SIZE * 4), %rsi
> > > -    jbe L(last_4x_vec_or_less)
> > > -# endif
> > > -
> > > -    /* Align data to 4 * VEC_SIZE.  */
> > > -    movq %rdi, %rcx
> > > -    andl $(4 * VEC_SIZE - 1), %ecx
> > > -    andq $-(4 * VEC_SIZE), %rdi
> > > +    VPCMPEQ (VEC_SIZE * 3 + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > > +    testl %eax, %eax
> > > +    jnz L(first_vec_x4)
> > >
> > > +    /* Align data to VEC_SIZE * 4 - 1.  */
> > >  # ifdef USE_AS_STRNLEN
> > > -    /* Adjust length.  */
> > > +    /* Before adjusting length check if at last VEC_SIZE * 4.  */
> > > +    cmpq $(VEC_SIZE * 4 - 1), %rsi
> > > +    jbe L(last_4x_vec_or_less_load)
> > > +    incq %rdi
> > > +    movl %edi, %ecx
> > > +    orq $(VEC_SIZE * 4 - 1), %rdi
> > > +    andl $(VEC_SIZE * 4 - 1), %ecx
> > > +    /* Readjust length.  */
> > >      addq %rcx, %rsi
> > > +# else
> > > +    incq %rdi
> > > +    orq $(VEC_SIZE * 4 - 1), %rdi
> > >  # endif
> > > -
> > > +    /* Compare 4 * VEC at a time forward.  */
> > >      .p2align 4
> > >  L(loop_4x_vec):
> > > -    /* Compare 4 * VEC at a time forward.  */
> > > -    vmovdqa (%rdi), %ymm1
> > > -    vmovdqa VEC_SIZE(%rdi), %ymm2
> > > -    vmovdqa (VEC_SIZE * 2)(%rdi), %ymm3
> > > -    vmovdqa (VEC_SIZE * 3)(%rdi), %ymm4
> > > -    VPMINU %ymm1, %ymm2, %ymm5
> > > -    VPMINU %ymm3, %ymm4, %ymm6
> > > -    VPMINU %ymm5, %ymm6, %ymm5
> > > -
> > > -    VPCMPEQ %ymm5, %ymm0, %ymm5
> > > -    vpmovmskb %ymm5, %eax
> > > -    testl %eax, %eax
> > > -    jnz L(4x_vec_end)
> > > -
> > > -    addq $(VEC_SIZE * 4), %rdi
> > > -
> > > -# ifndef USE_AS_STRNLEN
> > > -    jmp L(loop_4x_vec)
> > > -# else
> > > +# ifdef USE_AS_STRNLEN
> > > +    /* Break if at end of length.  */
> > >      subq $(VEC_SIZE * 4), %rsi
> > > -    ja L(loop_4x_vec)
> > > -
> > > -L(last_4x_vec_or_less):
> > > -    /* Less than 4 * VEC and aligned to VEC_SIZE.  */
> > > -    addl $(VEC_SIZE * 2), %esi
> > > -    jle L(last_2x_vec)
> > > +    jb L(last_4x_vec_or_less_cmpeq)
> > > +# endif
> > > +    /* Save some code size by microfusing VPMINU with the load. Since
> > > +       the matches in ymm2/ymm4 can only be returned if there where no
> > > +       matches in ymm1/ymm3 respectively there is no issue with overlap.
> > > +     */
> > > +    vmovdqa 1(%rdi), %ymm1
> > > +    VPMINU (VEC_SIZE + 1)(%rdi), %ymm1, %ymm2
> > > +    vmovdqa (VEC_SIZE * 2 + 1)(%rdi), %ymm3
> > > +    VPMINU (VEC_SIZE * 3 + 1)(%rdi), %ymm3, %ymm4
> > > +
> > > +    VPMINU %ymm2, %ymm4, %ymm5
> > > +    VPCMPEQ %ymm5, %ymm0, %ymm5
> > > +    vpmovmskb %ymm5, %ecx
> > >
> > > -    VPCMPEQ (%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    testl %eax, %eax
> > > -    jnz L(first_vec_x0)
> > > +    subq $-(VEC_SIZE * 4), %rdi
> > > +    testl %ecx, %ecx
> > > +    jz L(loop_4x_vec)
> > >
> > > -    VPCMPEQ VEC_SIZE(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    testl %eax, %eax
> > > -    jnz L(first_vec_x1)
> > >
> > > -    VPCMPEQ (VEC_SIZE * 2)(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > +    VPCMPEQ %ymm1, %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > > +    subq %rdx, %rdi
> > >      testl %eax, %eax
> > > +    jnz L(last_vec_return_x0)
> > >
> > > -    jnz L(first_vec_x2_check)
> > > -    subl $VEC_SIZE, %esi
> > > -    jle L(max)
> > > -
> > > -    VPCMPEQ (VEC_SIZE * 3)(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > +    VPCMPEQ %ymm2, %ymm0, %ymm2
> > > +    vpmovmskb %ymm2, %eax
> > >      testl %eax, %eax
> > > -
> > > -    jnz L(first_vec_x3_check)
> > > -    movq %r8, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > +    jnz L(last_vec_return_x1)
> > > +
> > > +    /* Combine last 2 VEC.  */
> > > +    VPCMPEQ %ymm3, %ymm0, %ymm3
> > > +    vpmovmskb %ymm3, %eax
> > > +    /* rcx has combined result from all 4 VEC. It will only be used if
> > > +       the first 3 other VEC all did not contain a match.  */
> > > +    salq $32, %rcx
> > > +    orq %rcx, %rax
> > > +    tzcntq %rax, %rax
> > > +    subq $(VEC_SIZE * 2 - 1), %rdi
> > > +    addq %rdi, %rax
> > > +# ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > > -# endif
> > > +# endif
> > >      VZEROUPPER_RETURN
> > >
> > > +
> > > +# ifdef USE_AS_STRNLEN
> > >      .p2align 4
> > > -L(last_2x_vec):
> > > -    addl $(VEC_SIZE * 2), %esi
> > > -    VPCMPEQ (%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    testl %eax, %eax
> > > +L(last_4x_vec_or_less_load):
> > > +    /* Depending on entry adjust rdi / prepare first VEC in ymm1.  */
> > > +    subq $-(VEC_SIZE * 4), %rdi
> > > +L(last_4x_vec_or_less_cmpeq):
> > > +    VPCMPEQ 1(%rdi), %ymm0, %ymm1
> > > +L(last_4x_vec_or_less):
> > >
> > > -    jnz L(first_vec_x0_check)
> > > -    subl $VEC_SIZE, %esi
> > > -    jle L(max)
> > > +    vpmovmskb %ymm1, %eax
> > > +    /* If remaining length > VEC_SIZE * 2. This works if esi is off by
> > > +       VEC_SIZE * 4.  */
> > > +    testl $(VEC_SIZE * 2), %esi
> > > +    jnz L(last_4x_vec)
> > >
> > > -    VPCMPEQ VEC_SIZE(%rdi), %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > +    /* length may have been negative or positive by an offset of
> > > +       VEC_SIZE * 4 depending on where this was called from. This fixes
> > > +       that.  */
> > > +    andl $(VEC_SIZE * 4 - 1), %esi
> > >      testl %eax, %eax
> > > -    jnz L(first_vec_x1_check)
> > > -    movq %r8, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > -    shrq $2, %rax
> > > -# endif
> > > -    VZEROUPPER_RETURN
> > > +    jnz L(last_vec_x1_check)
> > >
> > > -    .p2align 4
> > > -L(first_vec_x0_check):
> > > +    subl $VEC_SIZE, %esi
> > > +    jb L(max)
> > > +
> > > +    VPCMPEQ (VEC_SIZE + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > >      tzcntl %eax, %eax
> > >      /* Check the end of data.  */
> > > -    cmpq %rax, %rsi
> > > -    jbe L(max)
> > > +    cmpl %eax, %esi
> > > +    jb L(max)
> > > +    subq %rdx, %rdi
> > > +    addl $(VEC_SIZE + 1), %eax
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > >  # ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > >  # endif
> > >      VZEROUPPER_RETURN
> > > +# endif
> > >
> > >      .p2align 4
> > > -L(first_vec_x1_check):
> > > +L(last_vec_return_x0):
> > >      tzcntl %eax, %eax
> > > -    /* Check the end of data.  */
> > > -    cmpq %rax, %rsi
> > > -    jbe L(max)
> > > -    addq $VEC_SIZE, %rax
> > > +    subq $(VEC_SIZE * 4 - 1), %rdi
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > +# ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > > -# endif
> > > +# endif
> > >      VZEROUPPER_RETURN
> > >
> > >      .p2align 4
> > > -L(first_vec_x2_check):
> > > +L(last_vec_return_x1):
> > >      tzcntl %eax, %eax
> > > -    /* Check the end of data.  */
> > > -    cmpq %rax, %rsi
> > > -    jbe L(max)
> > > -    addq $(VEC_SIZE * 2), %rax
> > > +    subq $(VEC_SIZE * 3 - 1), %rdi
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > +# ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > > -# endif
> > > +# endif
> > >      VZEROUPPER_RETURN
> > >
> > > +# ifdef USE_AS_STRNLEN
> > >      .p2align 4
> > > -L(first_vec_x3_check):
> > > +L(last_vec_x1_check):
> > > +
> > >      tzcntl %eax, %eax
> > >      /* Check the end of data.  */
> > > -    cmpq %rax, %rsi
> > > -    jbe L(max)
> > > -    addq $(VEC_SIZE * 3), %rax
> > > +    cmpl %eax, %esi
> > > +    jb L(max)
> > > +    subq %rdx, %rdi
> > > +    incl %eax
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > >  # ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > >  # endif
> > >      VZEROUPPER_RETURN
> > >
> > > -    .p2align 4
> > >  L(max):
> > >      movq %r8, %rax
> > > +    VZEROUPPER_RETURN
> > > +
> > > +    .p2align 4
> > > +L(last_4x_vec):
> > > +    /* Test first 2x VEC normally.  */
> > > +    testl %eax, %eax
> > > +    jnz L(last_vec_x1)
> > > +
> > > +    VPCMPEQ (VEC_SIZE + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > > +    testl %eax, %eax
> > > +    jnz L(last_vec_x2)
> > > +
> > > +    /* Normalize length.  */
> > > +    andl $(VEC_SIZE * 4 - 1), %esi
> > > +    VPCMPEQ (VEC_SIZE * 2 + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > > +    testl %eax, %eax
> > > +    jnz L(last_vec_x3)
> > > +
> > > +    subl $(VEC_SIZE * 3), %esi
> > > +    jb L(max)
> > > +
> > > +    VPCMPEQ (VEC_SIZE * 3 + 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > > +    tzcntl %eax, %eax
> > > +    /* Check the end of data.  */
> > > +    cmpl %eax, %esi
> > > +    jb L(max)
> > > +    subq %rdx, %rdi
> > > +    addl $(VEC_SIZE * 3 + 1), %eax
> > > +    addq %rdi, %rax
> > >  # ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > >  # endif
> > >      VZEROUPPER_RETURN
> > >
> > > -    .p2align 4
> > > -L(zero):
> > > -    xorl %eax, %eax
> > > -    ret
> > > -# endif
> > >
> > >      .p2align 4
> > > -L(first_vec_x0):
> > > +L(last_vec_x1):
> > > +    /* essentially duplicates of first_vec_x1 but use 64 bit
> > > +       instructions.  */
> > >      tzcntl %eax, %eax
> > > +    subq %rdx, %rdi
> > > +    incl %eax
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > +# ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > > -# endif
> > > +# endif
> > >      VZEROUPPER_RETURN
> > >
> > >      .p2align 4
> > > -L(first_vec_x1):
> > > +L(last_vec_x2):
> > > +    /* essentially duplicates of first_vec_x1 but use 64 bit
> > > +       instructions.  */
> > >      tzcntl %eax, %eax
> > > -    addq $VEC_SIZE, %rax
> > > +    subq %rdx, %rdi
> > > +    addl $(VEC_SIZE + 1), %eax
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > +# ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > > -# endif
> > > +# endif
> > >      VZEROUPPER_RETURN
> > >
> > >      .p2align 4
> > > -L(first_vec_x2):
> > > +L(last_vec_x3):
> > >      tzcntl %eax, %eax
> > > -    addq $(VEC_SIZE * 2), %rax
> > > +    subl $(VEC_SIZE * 2), %esi
> > > +    /* Check the end of data.  */
> > > +    cmpl %eax, %esi
> > > +    jb L(max_end)
> > > +    subq %rdx, %rdi
> > > +    addl $(VEC_SIZE * 2 + 1), %eax
> > >      addq %rdi, %rax
> > > -    subq %rdx, %rax
> > > -# ifdef USE_AS_WCSLEN
> > > +# ifdef USE_AS_WCSLEN
> > >      shrq $2, %rax
> > > -# endif
> > > +# endif
> > > +    VZEROUPPER_RETURN
> > > +L(max_end):
> > > +    movq %r8, %rax
> > >      VZEROUPPER_RETURN
> > > +# endif
> > >
> > > +    /* Cold case for crossing page with first load.  */
> > >      .p2align 4
> > > -L(4x_vec_end):
> > > -    VPCMPEQ %ymm1, %ymm0, %ymm1
> > > -    vpmovmskb %ymm1, %eax
> > > -    testl %eax, %eax
> > > -    jnz L(first_vec_x0)
> > > -    VPCMPEQ %ymm2, %ymm0, %ymm2
> > > -    vpmovmskb %ymm2, %eax
> > > +L(cross_page_boundary):
> > > +    /* Align data to VEC_SIZE - 1.  */
> > > +    orq $(VEC_SIZE - 1), %rdi
> > > +    VPCMPEQ -(VEC_SIZE - 1)(%rdi), %ymm0, %ymm1
> > > +    vpmovmskb %ymm1, %eax
> > > +    /* Remove the leading bytes. sarxl only uses bits [5:0] of COUNT
> > > +       so no need to manually mod rdx.  */
> > > +    sarxl %edx, %eax, %eax
> >
> > This is a BMI2 instruction, which is not necessarily available when
> > AVX2 is available. This causes SIGILL on some CPUs. I have reported
> > that in https://sourceware.org/bugzilla/show_bug.cgi?id=29611
>
> This is not a bug on master as:
>
> commit 83c5b368226c34a2f0a5287df40fc290b2b34359
> Author: H.J. Lu
> Date:   Mon Apr 19 10:45:07 2021 -0700
>
>     x86-64: Require BMI2 for strchr-avx2.S
>
> is already in tree. The issue is the avx2 changes were backported
> without H.J.'s changes.
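
(For anyone who wants to check whether a given host is affected: the
rewritten avx2 functions use sarx, so they are only safe when BMI2 is
also present. Below is a small stand-alone test -- illustrative only,
not part of the attached patches; it relies on GCC's
__builtin_cpu_supports -- that prints whether a machine has the
problematic AVX2-without-BMI2 combination:

  #include <stdio.h>

  int
  main (void)
  {
    /* Populate the CPU feature bits consulted by
       __builtin_cpu_supports.  */
    __builtin_cpu_init ();
    int avx2 = __builtin_cpu_supports ("avx2");
    int bmi2 = __builtin_cpu_supports ("bmi2");
    /* AVX2 without BMI2 is the combination that makes the backported
       strlen-avx2.S fault on sarx (BZ# 29611).  */
    printf ("avx2=%d bmi2=%d: %s\n", avx2, bmi2,
            (avx2 && !bmi2) ? "affected" : "not affected");
    return 0;
  }

The attached patches take the same approach as the master commit quoted
above: every ifunc selector that can pick an avx2 variant additionally
requires CPU_FEATURE_USABLE (BMI2).)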
> > > > Regards > > Aurelien > > > > -- > > Aurelien Jarno GPG: 4096R/1DDD8C9B > > aurelien@aurel32.net http://www.aurel32.net --000000000000f0c82305e9bd1b70 Content-Type: application/octet-stream; name="2.31-2.30-2.29-2.28.patch" Content-Disposition: attachment; filename="2.31-2.30-2.29-2.28.patch" Content-Transfer-Encoding: base64 Content-ID: X-Attachment-Id: f_l8losqtj0 RnJvbSA4NmUxZDg4ZTFhM2MxMjY1OTdlZjM5MTY1Mjc1YWRhNzU2NGNmY2U5IE1vbiBTZXAgMTcg MDA6MDA6MDAgMjAwMQpGcm9tOiAiSC5KLiBMdSIgPGhqbC50b29sc0BnbWFpbC5jb20+CkRhdGU6 IE1vbiwgMTkgQXByIDIwMjEgMTA6NDU6MDcgLTA3MDAKU3ViamVjdDogW1BBVENIXSB4ODYtNjQ6 IFJlcXVpcmUgQk1JMiBmb3Igc3RyY2hyLWF2eDIuUwoKU2luY2Ugc3RyY2hyLWF2eDIuUyB1cGRh dGVkIGJ5Cgpjb21taXQgMWY3NDVlY2MyMTA5ODkwODg2YjE2MWQ0NzkxZTE0MDZmZGZjMjliOApB dXRob3I6IG5vYWggPGdvbGRzdGVpbi53Lm5AZ21haWwuY29tPgpEYXRlOiAgIFdlZCBGZWIgMyAw MDozODo1OSAyMDIxIC0wNTAwCgogICAgeDg2LTY0OiBSZWZhY3RvciBhbmQgaW1wcm92ZSBwZXJm b3JtYW5jZSBvZiBzdHJjaHItYXZ4Mi5TCgp1c2VzIHNhcng6CgpjNCBlMiA3MiBmNyBjMCAgICAg ICAJc2FyeCAgICVlY3gsJWVheCwlZWF4Cgpmb3Igc3RyY2hyLWF2eDIgZmFtaWx5IGZ1bmN0aW9u cywgcmVxdWlyZSBCTUkyIGluIGlmdW5jLWltcGwtbGlzdC5jIGFuZAppZnVuYy1hdngyLmguCgoo Y2hlcnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCA4M2M1YjM2ODIyNmMzNGEyZjBhNTI4N2RmNDBmYzI5 MGIyYjM0MzU5KQotLS0KIHN5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1hdngyLmggICAg ICB8ICA0ICsrLS0KIHN5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1pbXBsLWxpc3QuYyB8 IDEyICsrKysrKysrKy0tLQogMiBmaWxlcyBjaGFuZ2VkLCAxMSBpbnNlcnRpb25zKCspLCA1IGRl bGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3N5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1h dngyLmggYi9zeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvaWZ1bmMtYXZ4Mi5oCmluZGV4IDc0MTg5 YjZhYTUuLjkyNWU1YjYxZWIgMTAwNjQ0Ci0tLSBhL3N5c2RlcHMveDg2XzY0L211bHRpYXJjaC9p ZnVuYy1hdngyLmgKKysrIGIvc3lzZGVwcy94ODZfNjQvbXVsdGlhcmNoL2lmdW5jLWF2eDIuaApA QCAtMzAsMTEgKzMwLDExIEBAIElGVU5DX1NFTEVDVE9SICh2b2lkKQogICBjb25zdCBzdHJ1Y3Qg Y3B1X2ZlYXR1cmVzKiBjcHVfZmVhdHVyZXMgPSBfX2dldF9jcHVfZmVhdHVyZXMgKCk7CiAKICAg aWYgKENQVV9GRUFUVVJFU19BUkNIX1AgKGNwdV9mZWF0dXJlcywgQVZYMl9Vc2FibGUpCisgICAg ICAmJiBDUFVfRkVBVFVSRVNfQ1BVX1AgKGNwdV9mZWF0dXJlcywgQk1JMikKICAgICAgICYmIENQ VV9GRUFUVVJFU19BUkNIX1AgKGNwdV9mZWF0dXJlcywgQVZYX0Zhc3RfVW5hbGlnbmVkX0xvYWQp KQogICAgIHsKICAgICAgIGlmIChDUFVfRkVBVFVSRVNfQVJDSF9QIChjcHVfZmVhdHVyZXMsIEFW WDUxMlZMX1VzYWJsZSkKLQkgICYmIENQVV9GRUFUVVJFU19BUkNIX1AgKGNwdV9mZWF0dXJlcywg QVZYNTEyQldfVXNhYmxlKQotCSAgJiYgQ1BVX0ZFQVRVUkVTX0NQVV9QIChjcHVfZmVhdHVyZXMs IEJNSTIpKQorCSAgJiYgQ1BVX0ZFQVRVUkVTX0FSQ0hfUCAoY3B1X2ZlYXR1cmVzLCBBVlg1MTJC V19Vc2FibGUpKQogCXJldHVybiBPUFRJTUlaRSAoZXZleCk7CiAKICAgICAgIGlmIChDUFVfRkVB VFVSRVNfQ1BVX1AgKGNwdV9mZWF0dXJlcywgUlRNKSkKZGlmZiAtLWdpdCBhL3N5c2RlcHMveDg2 XzY0L211bHRpYXJjaC9pZnVuYy1pbXBsLWxpc3QuYyBiL3N5c2RlcHMveDg2XzY0L211bHRpYXJj aC9pZnVuYy1pbXBsLWxpc3QuYwppbmRleCA1NmIwNWVlNzQxLi5mNzYzMjZlMGIyIDEwMDY0NAot LS0gYS9zeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvaWZ1bmMtaW1wbC1saXN0LmMKKysrIGIvc3lz ZGVwcy94ODZfNjQvbXVsdGlhcmNoL2lmdW5jLWltcGwtbGlzdC5jCkBAIC00MDAsMTAgKzQwMCwx MiBAQCBfX2xpYmNfaWZ1bmNfaW1wbF9saXN0IChjb25zdCBjaGFyICpuYW1lLCBzdHJ1Y3QgbGli Y19pZnVuY19pbXBsICphcnJheSwKICAgLyogU3VwcG9ydCBzeXNkZXBzL3g4Nl82NC9tdWx0aWFy Y2gvc3RyY2hyLmMuICAqLwogICBJRlVOQ19JTVBMIChpLCBuYW1lLCBzdHJjaHIsCiAJICAgICAg SUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCBzdHJjaHIsCi0JCQkgICAgICBIQVNfQVJDSF9GRUFU VVJFIChBVlgyX1VzYWJsZSksCisJCQkgICAgICAoSEFTX0FSQ0hfRkVBVFVSRSAoQVZYMl9Vc2Fi bGUpCisJCQkgICAgICAgJiYgSEFTX0NQVV9GRUFUVVJFIChCTUkyKSksCiAJCQkgICAgICBfX3N0 cmNocl9hdngyKQogCSAgICAgIElGVU5DX0lNUExfQUREIChhcnJheSwgaSwgc3RyY2hyLAogCQkJ ICAgICAgKEhBU19BUkNIX0ZFQVRVUkUgKEFWWDJfVXNhYmxlKQorCQkJICAgICAgICYmIEhBU19D 
UFVfRkVBVFVSRSAoQk1JMikKIAkJCSAgICAgICAmJiBIQVNfQ1BVX0ZFQVRVUkUgKFJUTSkpLAog CQkJICAgICAgX19zdHJjaHJfYXZ4Ml9ydG0pCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5 LCBpLCBzdHJjaHIsCkBAIC00MTcsMTAgKzQxOSwxMiBAQCBfX2xpYmNfaWZ1bmNfaW1wbF9saXN0 IChjb25zdCBjaGFyICpuYW1lLCBzdHJ1Y3QgbGliY19pZnVuY19pbXBsICphcnJheSwKICAgLyog U3VwcG9ydCBzeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvc3RyY2hybnVsLmMuICAqLwogICBJRlVO Q19JTVBMIChpLCBuYW1lLCBzdHJjaHJudWwsCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5 LCBpLCBzdHJjaHJudWwsCi0JCQkgICAgICBIQVNfQVJDSF9GRUFUVVJFIChBVlgyX1VzYWJsZSks CisJCQkgICAgICAoSEFTX0FSQ0hfRkVBVFVSRSAoQVZYMl9Vc2FibGUpCisJCQkgICAgICAgJiYg SEFTX0NQVV9GRUFUVVJFIChCTUkyKSksCiAJCQkgICAgICBfX3N0cmNocm51bF9hdngyKQogCSAg ICAgIElGVU5DX0lNUExfQUREIChhcnJheSwgaSwgc3RyY2hybnVsLAogCQkJICAgICAgKEhBU19B UkNIX0ZFQVRVUkUgKEFWWDJfVXNhYmxlKQorCQkJICAgICAgICYmIEhBU19DUFVfRkVBVFVSRSAo Qk1JMikKIAkJCSAgICAgICAmJiBIQVNfQ1BVX0ZFQVRVUkUgKFJUTSkpLAogCQkJICAgICAgX19z dHJjaHJudWxfYXZ4Ml9ydG0pCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCBzdHJj aHJudWwsCkBAIC01NzQsMTAgKzU3OCwxMiBAQCBfX2xpYmNfaWZ1bmNfaW1wbF9saXN0IChjb25z dCBjaGFyICpuYW1lLCBzdHJ1Y3QgbGliY19pZnVuY19pbXBsICphcnJheSwKICAgLyogU3VwcG9y dCBzeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvd2NzY2hyLmMuICAqLwogICBJRlVOQ19JTVBMIChp LCBuYW1lLCB3Y3NjaHIsCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCB3Y3NjaHIs Ci0JCQkgICAgICBIQVNfQVJDSF9GRUFUVVJFIChBVlgyX1VzYWJsZSksCisJCQkgICAgICAoSEFT X0FSQ0hfRkVBVFVSRSAoQVZYMl9Vc2FibGUpCisJCQkgICAgICAgJiYgSEFTX0NQVV9GRUFUVVJF IChCTUkyKSksCiAJCQkgICAgICBfX3djc2Nocl9hdngyKQogCSAgICAgIElGVU5DX0lNUExfQURE IChhcnJheSwgaSwgd2NzY2hyLAogCQkJICAgICAgKEhBU19BUkNIX0ZFQVRVUkUgKEFWWDJfVXNh YmxlKQorCQkJICAgICAgICYmIEhBU19DUFVfRkVBVFVSRSAoQk1JMikKIAkJCSAgICAgICAmJiBI QVNfQ1BVX0ZFQVRVUkUgKFJUTSkpLAogCQkJICAgICAgX193Y3NjaHJfYXZ4Ml9ydG0pCiAJICAg ICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCB3Y3NjaHIsCi0tIAoyLjM2LjEKCg== --000000000000f0c82305e9bd1b70 Content-Type: application/octet-stream; name="2.32.patch" Content-Disposition: attachment; filename="2.32.patch" Content-Transfer-Encoding: base64 Content-ID: X-Attachment-Id: f_l8losqtw1 RnJvbSBjMDZiMjg5MDI3NTg2OGQ3YjhiNGVlYjVkNTdjYjI4Mjg4MTcwODk5IE1vbiBTZXAgMTcg MDA6MDA6MDAgMjAwMQpGcm9tOiAiSC5KLiBMdSIgPGhqbC50b29sc0BnbWFpbC5jb20+CkRhdGU6 IE1vbiwgMTkgQXByIDIwMjEgMTA6NDU6MDcgLTA3MDAKU3ViamVjdDogW1BBVENIXSB4ODYtNjQ6 IFJlcXVpcmUgQk1JMiBmb3Igc3RyY2hyLWF2eDIuUwoKU2luY2Ugc3RyY2hyLWF2eDIuUyB1cGRh dGVkIGJ5Cgpjb21taXQgMWY3NDVlY2MyMTA5ODkwODg2YjE2MWQ0NzkxZTE0MDZmZGZjMjliOApB dXRob3I6IG5vYWggPGdvbGRzdGVpbi53Lm5AZ21haWwuY29tPgpEYXRlOiAgIFdlZCBGZWIgMyAw MDozODo1OSAyMDIxIC0wNTAwCgogICAgeDg2LTY0OiBSZWZhY3RvciBhbmQgaW1wcm92ZSBwZXJm b3JtYW5jZSBvZiBzdHJjaHItYXZ4Mi5TCgp1c2VzIHNhcng6CgpjNCBlMiA3MiBmNyBjMCAgICAg ICAJc2FyeCAgICVlY3gsJWVheCwlZWF4Cgpmb3Igc3RyY2hyLWF2eDIgZmFtaWx5IGZ1bmN0aW9u cywgcmVxdWlyZSBCTUkyIGluIGlmdW5jLWltcGwtbGlzdC5jIGFuZAppZnVuYy1hdngyLmguCgoo Y2hlcnJ5IHBpY2tlZCBmcm9tIGNvbW1pdCA4M2M1YjM2ODIyNmMzNGEyZjBhNTI4N2RmNDBmYzI5 MGIyYjM0MzU5KQotLS0KIHN5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1hdngyLmggICAg ICB8ICA0ICsrLS0KIHN5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1pbXBsLWxpc3QuYyB8 IDEyICsrKysrKysrKy0tLQogMiBmaWxlcyBjaGFuZ2VkLCAxMSBpbnNlcnRpb25zKCspLCA1IGRl bGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL3N5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1h dngyLmggYi9zeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvaWZ1bmMtYXZ4Mi5oCmluZGV4IGY0NTBj Nzg2ZjAuLjBkOWQ4Mzc0ODggMTAwNjQ0Ci0tLSBhL3N5c2RlcHMveDg2XzY0L211bHRpYXJjaC9p ZnVuYy1hdngyLmgKKysrIGIvc3lzZGVwcy94ODZfNjQvbXVsdGlhcmNoL2lmdW5jLWF2eDIuaApA QCAtMzAsMTEgKzMwLDExIEBAIElGVU5DX1NFTEVDVE9SICh2b2lkKQogICBjb25zdCBzdHJ1Y3Qg 
Y3B1X2ZlYXR1cmVzKiBjcHVfZmVhdHVyZXMgPSBfX2dldF9jcHVfZmVhdHVyZXMgKCk7CiAKICAg aWYgKENQVV9GRUFUVVJFX1VTQUJMRV9QIChjcHVfZmVhdHVyZXMsIEFWWDIpCisgICAgICAmJiBD UFVfRkVBVFVSRV9VU0FCTEVfUCAoY3B1X2ZlYXR1cmVzLCBCTUkyKQogICAgICAgJiYgQ1BVX0ZF QVRVUkVTX0FSQ0hfUCAoY3B1X2ZlYXR1cmVzLCBBVlhfRmFzdF9VbmFsaWduZWRfTG9hZCkpCiAg ICAgewogICAgICAgaWYgKENQVV9GRUFUVVJFX1VTQUJMRV9QIChjcHVfZmVhdHVyZXMsIEFWWDUx MlZMKQotCSAgJiYgQ1BVX0ZFQVRVUkVfVVNBQkxFX1AgKGNwdV9mZWF0dXJlcywgQVZYNTEyQlcp Ci0JICAmJiBDUFVfRkVBVFVSRV9VU0FCTEVfUCAoY3B1X2ZlYXR1cmVzLCBCTUkyKSkKKwkgICYm IENQVV9GRUFUVVJFX1VTQUJMRV9QIChjcHVfZmVhdHVyZXMsIEFWWDUxMkJXKSkKIAlyZXR1cm4g T1BUSU1JWkUgKGV2ZXgpOwogCiAgICAgICBpZiAoQ1BVX0ZFQVRVUkVfVVNBQkxFX1AgKGNwdV9m ZWF0dXJlcywgUlRNKSkKZGlmZiAtLWdpdCBhL3N5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVu Yy1pbXBsLWxpc3QuYyBiL3N5c2RlcHMveDg2XzY0L211bHRpYXJjaC9pZnVuYy1pbXBsLWxpc3Qu YwppbmRleCA5MjBlNjQyNDFlLi5kNGJiZjYxMDMwIDEwMDY0NAotLS0gYS9zeXNkZXBzL3g4Nl82 NC9tdWx0aWFyY2gvaWZ1bmMtaW1wbC1saXN0LmMKKysrIGIvc3lzZGVwcy94ODZfNjQvbXVsdGlh cmNoL2lmdW5jLWltcGwtbGlzdC5jCkBAIC00MDAsMTAgKzQwMCwxMiBAQCBfX2xpYmNfaWZ1bmNf aW1wbF9saXN0IChjb25zdCBjaGFyICpuYW1lLCBzdHJ1Y3QgbGliY19pZnVuY19pbXBsICphcnJh eSwKICAgLyogU3VwcG9ydCBzeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvc3RyY2hyLmMuICAqLwog ICBJRlVOQ19JTVBMIChpLCBuYW1lLCBzdHJjaHIsCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFy cmF5LCBpLCBzdHJjaHIsCi0JCQkgICAgICBDUFVfRkVBVFVSRV9VU0FCTEUgKEFWWDIpLAorCQkJ ICAgICAgKENQVV9GRUFUVVJFX1VTQUJMRSAoQVZYMikKKwkJCSAgICAgICAmJiBDUFVfRkVBVFVS RV9VU0FCTEUgKEJNSTIpKSwKIAkJCSAgICAgIF9fc3RyY2hyX2F2eDIpCiAJICAgICAgSUZVTkNf SU1QTF9BREQgKGFycmF5LCBpLCBzdHJjaHIsCiAJCQkgICAgICAoQ1BVX0ZFQVRVUkVfVVNBQkxF IChBVlgyKQorCQkJICAgICAgICYmIENQVV9GRUFUVVJFX1VTQUJMRSAoQk1JMikKIAkJCSAgICAg ICAmJiBDUFVfRkVBVFVSRV9VU0FCTEUgKFJUTSkpLAogCQkJICAgICAgX19zdHJjaHJfYXZ4Ml9y dG0pCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCBzdHJjaHIsCkBAIC00MTcsMTAg KzQxOSwxMiBAQCBfX2xpYmNfaWZ1bmNfaW1wbF9saXN0IChjb25zdCBjaGFyICpuYW1lLCBzdHJ1 Y3QgbGliY19pZnVuY19pbXBsICphcnJheSwKICAgLyogU3VwcG9ydCBzeXNkZXBzL3g4Nl82NC9t dWx0aWFyY2gvc3RyY2hybnVsLmMuICAqLwogICBJRlVOQ19JTVBMIChpLCBuYW1lLCBzdHJjaHJu dWwsCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCBzdHJjaHJudWwsCi0JCQkgICAg ICBDUFVfRkVBVFVSRV9VU0FCTEUgKEFWWDIpLAorCQkJICAgICAgKENQVV9GRUFUVVJFX1VTQUJM RSAoQVZYMikKKwkJCSAgICAgICAmJiBDUFVfRkVBVFVSRV9VU0FCTEUgKEJNSTIpKSwKIAkJCSAg ICAgIF9fc3RyY2hybnVsX2F2eDIpCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCBz dHJjaHJudWwsCiAJCQkgICAgICAoQ1BVX0ZFQVRVUkVfVVNBQkxFIChBVlgyKQorCQkJICAgICAg ICYmIENQVV9GRUFUVVJFX1VTQUJMRSAoQk1JMikKIAkJCSAgICAgICAmJiBDUFVfRkVBVFVSRV9V U0FCTEUgKFJUTSkpLAogCQkJICAgICAgX19zdHJjaHJudWxfYXZ4Ml9ydG0pCiAJICAgICAgSUZV TkNfSU1QTF9BREQgKGFycmF5LCBpLCBzdHJjaHJudWwsCkBAIC01NzQsMTAgKzU3OCwxMiBAQCBf X2xpYmNfaWZ1bmNfaW1wbF9saXN0IChjb25zdCBjaGFyICpuYW1lLCBzdHJ1Y3QgbGliY19pZnVu Y19pbXBsICphcnJheSwKICAgLyogU3VwcG9ydCBzeXNkZXBzL3g4Nl82NC9tdWx0aWFyY2gvd2Nz Y2hyLmMuICAqLwogICBJRlVOQ19JTVBMIChpLCBuYW1lLCB3Y3NjaHIsCiAJICAgICAgSUZVTkNf SU1QTF9BREQgKGFycmF5LCBpLCB3Y3NjaHIsCi0JCQkgICAgICBDUFVfRkVBVFVSRV9VU0FCTEUg KEFWWDIpLAorCQkJICAgICAgKENQVV9GRUFUVVJFX1VTQUJMRSAoQVZYMikKKwkJCSAgICAgICAm JiBDUFVfRkVBVFVSRV9VU0FCTEUgKEJNSTIpKSwKIAkJCSAgICAgIF9fd2NzY2hyX2F2eDIpCiAJ ICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCB3Y3NjaHIsCiAJCQkgICAgICAoQ1BVX0ZF QVRVUkVfVVNBQkxFIChBVlgyKQorCQkJICAgICAgICYmIENQVV9GRUFUVVJFX1VTQUJMRSAoQk1J MikKIAkJCSAgICAgICAmJiBDUFVfRkVBVFVSRV9VU0FCTEUgKFJUTSkpLAogCQkJICAgICAgX193 Y3NjaHJfYXZ4Ml9ydG0pCiAJICAgICAgSUZVTkNfSU1QTF9BREQgKGFycmF5LCBpLCB3Y3NjaHIs Ci0tIAoyLjM2LjEKCg== --000000000000f0c82305e9bd1b70--