From: Sunil Pandey
Date: Tue, 3 May 2022 23:35:44 -0700
Subject: Re: [PATCH v2] x86-64: Optimize bzero
To: "H.J. Lu", Libc-stable Mailing List
Cc: Adhemerval Zanella, GNU C Library

On Mon, Feb 14, 2022 at 7:04 AM H.J. Lu via Libc-alpha wrote:
>
> On Mon, Feb 14, 2022 at 6:07 AM Adhemerval Zanella via Libc-alpha
> wrote:
> >
> > On 14/02/2022 09:41, Noah Goldstein wrote:
> > > On Mon, Feb 14, 2022 at 6:07 AM Adhemerval Zanella
> > > wrote:
> > >>
> > >> On 12/02/2022 20:46, Noah Goldstein wrote:
> > >>> On Fri, Feb 11, 2022 at 7:01 AM Adhemerval Zanella via Libc-alpha
> > >>> wrote:
> > >>>>
> > >>>> On 10/02/2022 18:07, Patrick McGehearty via Libc-alpha wrote:
> > >>>>> Just as another point of information, Solaris libc implemented
> > >>>>> bzero as moving arguments around appropriately and then jumping
> > >>>>> to memset. No one noticed enough to file a complaint. Of course,
> > >>>>> short fixed-length bzero was handled with inline stores of zero
> > >>>>> by the compiler. For long vector bzeroing, the overhead was
> > >>>>> negligible.
> > >>>>>
> > >>>>> When certain Sparc hardware implementations provided faster
> > >>>>> methods for zeroing a cache line at a time on cache line
> > >>>>> boundaries, memset added a single test for zero if and only if
> > >>>>> the length to be memset was over a threshold that seemed likely
> > >>>>> to make the faster method worthwhile. The principal advantage
> > >>>>> of the fast zeroing operation is that it did not require data
> > >>>>> to move from memory to cache before writing zeros to memory,
> > >>>>> protecting cache locality in the face of large block zeroing.
> > >>>>> I was responsible for much of that optimization effort.
> > >>>>> Whether that optimization was really worth it is open for debate
> > >>>>> for a variety of reasons that I won't go into just now.
> > >>>>
> > >>>> Afaik this is pretty much what optimized memset implementations
> > >>>> do, if the architecture allows it. For instance, aarch64 uses
> > >>>> 'dc zva' for sizes larger than 256 and powerpc uses dcbz with a
> > >>>> similar strategy.
> > >>>>
> > >>>>> Apps still used bzero or memset(target,zero,length) according to
> > >>>>> their preferences, but the code was unified under memset.
> > >>>>>
> > >>>>> I am inclined to agree with keeping bzero in the API for
> > >>>>> compatibility with old code/old binaries/old programmers. :-)
> > >>>>
> > >>>> The main driver to remove the bzero internal implementation is
> > >>>> just that *currently* gcc does not generate bzero calls by
> > >>>> default (I couldn't find a single binary that calls bzero on my
> > >>>> system).
> > >>>
> > >>> Does it make sense then to add '__memsetzero' so that we can have
> > >>> a function optimized for setting zero?
> > >>
> > >> Will it really be a huge gain, or just a micro-optimization that
> > >> adds a bunch more ifunc variants along with the maintenance cost
> > >> associated with them?
> > > Is there any way it can be set up so that one C implementation can
> > > cover all the arches that want to just leave `__memsetzero` as an
> > > alias to `memset`? I know they have incompatible interfaces, which
> > > makes it hard, but would a weak static inline in string.h work?
> > >
> > > For some of the shorter control flows (which are generally small
> > > sizes and very hot) we saw reasonable benefits on x86_64.
> > >
> > > The most significant was the EVEX/AVX2 [32, 64] case, where it
> > > netted us ~25% throughput. This is a pretty hot set value, so it
> > > may be worth it.
> >
> > With different prototypes and semantics we won't be able to define an
> > alias. What we used to do, but moved away from in recent versions,
> > was to define a static inline function that glues the two functions
> > together when optimization is enabled.
>
> I have
>
> /* NB: bzero returns void and __memsetzero returns void *. */
> asm (".weak bzero");
> asm ("bzero = __memsetzero");
> asm (".global __bzero");
> asm ("__bzero = __memsetzero");
>
> > >>
> > >> My understanding is __memsetzero would maybe yield some gain in
> > >> store mask generation (some architectures might have a zero
> > >> register or some instruction to generate one); however, it would
> > >> require using the same strategy as memset, i.e. arch-specific
> > >> instructions that optimize cache utilization (dc zva, dcbz).
> > >>
> > >> So it would mostly require a lot of arch-specific code to share
> > >> the memset code with __memsetzero (to avoid increasing code size),
> > >> so I am not sure if this is really a gain in the long term.
> > >
> > > It's worth noting that between the two, `memset` is the cold
> > > function and `__memsetzero` is the hot one. Based on profiles of
> > > GCC 11 and Python 3.7.7, setting zero covers 99%+ of cases.
> >
> > This is very workload specific, and I think that with more advanced
> > compiler optimizations like LTO and PGO such calls could most likely
> > be optimized by the compiler itself (either by inlining or by
> > creating a synthetic function to handle it).
> >
> > What I worry about is that such symbols might end up like the AEABI
> > memcpy variants, which were added as a way to optimize when alignment
> > is known to be a multiple of the word size, but ended up not being
> > implemented and also not being generated by the compiler (at least
> > not by gcc).
>
> --
> H.J.

I would like to backport this patch to release branches. Any comments or
objections?

--Sunil