public inbox for libc-alpha@sourceware.org
From: Sunil Pandey <skpgkp2@gmail.com>
To: "H.J. Lu" <hjl.tools@gmail.com>,
	 Libc-stable Mailing List <libc-stable@sourceware.org>
Cc: Adhemerval Zanella <adhemerval.zanella@linaro.org>,
	 GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH v2] x86-64: Optimize bzero
Date: Tue, 3 May 2022 23:35:44 -0700	[thread overview]
Message-ID: <CAMAf5_frWOdpz5js5kNaE9=5bR+_fTP6xaKoZfWUyEXJDJmwpA@mail.gmail.com>
In-Reply-To: <CAMe9rOot8YEAE1Qvc-LowW-gggfusYzRhcePN4+as1q639dieQ@mail.gmail.com>

On Mon, Feb 14, 2022 at 7:04 AM H.J. Lu via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On Mon, Feb 14, 2022 at 6:07 AM Adhemerval Zanella via Libc-alpha
> <libc-alpha@sourceware.org> wrote:
> >
> >
> >
> > On 14/02/2022 09:41, Noah Goldstein wrote:
> > > On Mon, Feb 14, 2022 at 6:07 AM Adhemerval Zanella
> > > <adhemerval.zanella@linaro.org> wrote:
> > >>
> > >>
> > >>
> > >> On 12/02/2022 20:46, Noah Goldstein wrote:
> > >>> On Fri, Feb 11, 2022 at 7:01 AM Adhemerval Zanella via Libc-alpha
> > >>> <libc-alpha@sourceware.org> wrote:
> > >>>>
> > >>>>
> > >>>>
> > >>>> On 10/02/2022 18:07, Patrick McGehearty via Libc-alpha wrote:
> > >>>>> Just as another point of information, Solaris libc implemented
> > >>>>> bzero as moving arguments around appropriately then jumping to
> > >>>>> memset. No one noticed enough to file a complaint. Of course,
> > >>>>> short fixed-length bzero was handled with inline stores of zero
> > >>>>> by the compiler. For long vector bzeroing, the overhead was
> > >>>>> negligible.
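> > >>>>>
> > >>>>> For illustration, the forwarding approach amounts to something
> > >>>>> like this minimal C sketch (hypothetical; the actual Solaris
> > >>>>> code did the argument shuffle in assembly):
> > >>>>>
> > >>>>>   #include <string.h>
> > >>>>>
> > >>>>>   /* bzero (s, n) reduces to memset (s, 0, n): the only work is
> > >>>>>      inserting the constant 0 as the middle argument before
> > >>>>>      transferring to the memset entry point.  */
> > >>>>>   void
> > >>>>>   bzero (void *s, size_t n)
> > >>>>>   {
> > >>>>>     memset (s, 0, n);
> > >>>>>   }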
> > >>>>>
> > >>>>> When certain Sparc hardware implementations provided faster methods
> > >>>>> for zeroing a cache line at a time on cache line boundaries,
> > >>>>> memset added a single test for zero, if and only if the length
> > >>>>> passed to memset was over a threshold that seemed likely to make
> > >>>>> it worthwhile to use the faster method. The principal advantage
> > >>>>> of the fast zeroing operation was that it did not require data
> > >>>>> to move from memory to cache before writing zeros to memory,
> > >>>>> protecting cache locality in the face of large block zeroing.
> > >>>>> I was responsible for much of that optimization effort.
> > >>>>> Whether that optimization was really worth it is open for debate
> > >>>>> for a variety of reasons that I won't go into just now.
> > >>>>
> > >>>> Afaik this is pretty much what optimized memset implementations
> > >>>> do, if the architecture allows it. For instance, aarch64 uses
> > >>>> 'dc zva' for sizes larger than 256 bytes and powerpc uses dcbz
> > >>>> with a similar strategy.
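> > >>>>
> > >>>> As a rough sketch of the aarch64 case (this assumes a 64-byte DC
> > >>>> ZVA block size; the real implementation reads it from DCZID_EL0
> > >>>> and handles the unaligned head and tail separately):
> > >>>>
> > >>>>   #include <stddef.h>
> > >>>>
> > >>>>   /* Zero [p, p + n) one ZVA block at a time.  p must be 64-byte
> > >>>>      aligned and n a multiple of 64.  DC ZVA writes a whole block
> > >>>>      of zeros without first pulling the line into the cache.  */
> > >>>>   static void
> > >>>>   zero_blocks (char *p, size_t n)
> > >>>>   {
> > >>>>     for (char *end = p + n; p < end; p += 64)
> > >>>>       asm volatile ("dc zva, %0" : : "r" (p) : "memory");
> > >>>>   }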
> > >>>>
> > >>>>>
> > >>>>> Apps still used bzero or memset(target,zero,length) according to
> > >>>>> their preferences, but the code was unified under memset.
> > >>>>>
> > >>>>> I am inclined to agree with keeping bzero in the API for
> > >>>>> compatibility with old code/old binaries/old programmers. :-)
> > >>>>
> > >>>> The main driver to remove the bzero internal implementation is
> > >>>> just that *currently* gcc does not generate bzero calls by default
> > >>>> (I couldn't find a single binary that calls bzero on my system).
> > >>>
> > >>> Does it make sense then to add '__memsetzero' so that we can have
> > >>> a function optimized for setting zero?
> > >>
> > >> Will it really be a huge gain, or just a micro-optimization that
> > >> will add a bunch more ifunc variants along with the maintenance
> > >> cost associated with them?
> > > Is there any way it can be set up so that one C implementation can
> > > cover all the arches that want to just leave `__memsetzero` as an
> > > alias of `memset`? I know the two have incompatible interfaces,
> > > which makes that hard, but would a weak static inline in string.h
> > > work?
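> > >
> > > For concreteness, something like this is what I have in mind (just
> > > a sketch, untested, assuming the fallback is acceptable wherever no
> > > tuned variant exists):
> > >
> > >   /* Generic fallback: forward to memset so a single C definition
> > >      covers every architecture without a tuned __memsetzero.  */
> > >   static __inline__ void *
> > >   __memsetzero (void *__s, size_t __n)
> > >   {
> > >     return memset (__s, 0, __n);
> > >   }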
> > >
> > > For some of the shorter control flows (which are generally small sizes
> > > and very hot) we saw reasonable benefits on x86_64.
> > >
> > > The most significant was the EVEX/AVX2 [32, 64] case, where it
> > > netted us ~25% higher throughput. That is a pretty hot range of set
> > > sizes, so it may be worth it.
> >
> > With different prototypes and semantics we won't be able to define an
> > alias. What we used to do, though we moved away from it in recent
> > versions, was to define a static inline function that glues the two
> > functions together when optimization is enabled.
>
> I have
>
> /* NB: bzero returns void and __memsetzero returns void *.  */
> asm (".weak bzero");
> asm ("bzero = __memsetzero");
> asm (".global __bzero");
> asm ("__bzero = __memsetzero");
>
> > >
> > >>
> > >> My understanding is __memsetzero would maybe yield some gain in the
> > >> store mask generation (some architecture might have a zero register
> > >> or some instruction to generate one); however, it would still
> > >> require using the same strategy as memset, i.e. the
> > >> architecture-specific instructions that optimize cache utilization
> > >> (dc zva, dcbz).
> > >>
> > >> So it would mostly require a lot of arch-specific code to share the
> > >> memset code with __memsetzero (to avoid increasing code size), and
> > >> I am not sure this is really a gain in the long term.
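> > >>
> > >> (On x86 the gain in question is roughly the following; a sketch
> > >> with AVX intrinsics, hypothetical helper name, tail bytes and small
> > >> sizes omitted: memset must broadcast the fill byte into a vector,
> > >> while a zero-only variant materializes its store value with an
> > >> idiomatic zeroing xor.)
> > >>
> > >>   #include <immintrin.h>
> > >>   #include <stddef.h>
> > >>
> > >>   static void
> > >>   zero_32byte_chunks (char *p, size_t n)
> > >>   {
> > >>     /* vpxor: no byte broadcast needed, no input dependency.
> > >>        memset would instead need _mm256_set1_epi8 (c).  */
> > >>     __m256i z = _mm256_setzero_si256 ();
> > >>     for (size_t i = 0; i + 32 <= n; i += 32)
> > >>       _mm256_storeu_si256 ((__m256i *) (p + i), z);
> > >>   }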
> > >
> > > It's worth noting that, between the two, `memset` is the cold
> > > function and `__memsetzero` is the hot one. Based on profiles of
> > > GCC 11 and Python 3.7.7, setting zero covers 99%+ of cases.
> >
> > This is very workload-specific, and I think with more advanced
> > compiler optimizations like LTO and PGO such calls could most likely
> > be optimized by the compiler itself (either by inlining or by
> > creating a synthetic function to handle it).
> >
> > What worries me is that such symbols might end up like the AEABI
> > memcpy variants, which were added as a way to optimize copies whose
> > alignment is known to be a multiple of the word size, but ended up
> > not being implemented and also not being generated by the compiler
> > (at least not by gcc).
>
>
>
> --
> H.J.

I would like to backport this patch to release branches.
Any comments or objections?

--Sunil


Thread overview: 39+ messages
2022-02-08 22:43 H.J. Lu
2022-02-08 23:56 ` Noah Goldstein
2022-02-09 11:41 ` Adhemerval Zanella
2022-02-09 22:14   ` Noah Goldstein
2022-02-10 12:35     ` Adhemerval Zanella
2022-02-10 13:01       ` Wilco Dijkstra
2022-02-10 13:10         ` Adhemerval Zanella
2022-02-10 13:16           ` Adhemerval Zanella
2022-02-10 13:17           ` Wilco Dijkstra
2022-02-10 13:22             ` Adhemerval Zanella
2022-02-10 17:50               ` Alejandro Colomar (man-pages)
2022-02-10 19:19                 ` Wilco Dijkstra
2022-02-10 20:27                   ` Alejandro Colomar (man-pages)
2022-02-10 20:42                     ` Adhemerval Zanella
2022-02-10 21:07                       ` Patrick McGehearty
2022-02-11 13:01                         ` Adhemerval Zanella
2022-02-12 23:46                           ` Noah Goldstein
2022-02-14 12:07                             ` Adhemerval Zanella
2022-02-14 12:41                               ` Noah Goldstein
2022-02-14 14:07                                 ` Adhemerval Zanella
2022-02-14 15:03                                   ` H.J. Lu
2022-05-04  6:35                                     ` Sunil Pandey [this message]
2022-05-04 12:52                                       ` Adhemerval Zanella
2022-05-04 14:50                                         ` H.J. Lu
2022-05-04 14:54                                           ` Adhemerval Zanella
2022-02-10 22:00                       ` Alejandro Colomar (man-pages)
2022-02-10 19:42                 ` Adhemerval Zanella
2022-02-10 18:28         ` Noah Goldstein
2022-02-10 18:35         ` Noah Goldstein
2022-02-15 13:38 Wilco Dijkstra
2022-02-23  8:12 ` Noah Goldstein
2022-02-23 12:09   ` Adhemerval Zanella
2022-02-24 13:16   ` Wilco Dijkstra
2022-02-24 15:48     ` H.J. Lu
2022-02-24 22:58     ` Noah Goldstein
2022-02-24 23:21       ` Noah Goldstein
2022-02-25 17:37         ` Noah Goldstein
2022-02-25 13:51       ` Wilco Dijkstra
2022-02-25 17:35         ` Noah Goldstein
