public inbox for libstdc++@gcc.gnu.org
From: "Александр Шитов" <alex.shitov1237@gmail.com>
To: Jonathan Wakely <jwakely.gcc@gmail.com>, libstdc++@gcc.gnu.org
Subject: Re: __lower_bound improvement for arithmetical types
Date: Wed, 29 Mar 2023 10:26:18 +0400	[thread overview]
Message-ID: <CAP=JgFoFYK8B7XNsL-2HxZET-K=kzsVy6xd6hMPUt4wkphc14Q@mail.gmail.com> (raw)
In-Reply-To: <CAP=JgFpBesxfVvcnWqE2ZmGt0s-H+4L6F1bBP2NbJiAnziCE7g@mail.gmail.com>

The benchmark shows that adding an extra check to determine whether two
iterators lie in the same cache line costs noticeable CPU time. That check
outweighs the benefit of searching within a single cache line.
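
For reference, here is a minimal sketch of the kind of check I benchmarked
(illustrative only, not the exact code that was measured; the helper name is
made up and the 64-byte line size is an assumption, the real value is
platform-dependent):

  #include <cstdint>

  // Illustrative only: true when the half-open range [__first, __last)
  // lies entirely within one cache line. 64 bytes is assumed here.
  template<typename _Tp>
    inline bool
    __same_cache_line(const _Tp* __first, const _Tp* __last)
    {
      if (__first == __last)
        return true;
      const auto __a = reinterpret_cast<std::uintptr_t>(__first);
      const auto __b = reinterpret_cast<std::uintptr_t>(__last - 1);
      return (__a / 64) == (__b / 64); // same 64-byte-aligned block
    }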


Given these facts, I'd rather stick with the previously proposed version. Do
I have to change the code somehow so that it can be merged into libstdc++?

On Sat, 25 Mar 2023 at 14:16, Александр Шитов <alex.shitov1237@gmail.com> wrote:

> The benchmark shows that adding an extra check to determine whether two
> iterators lie in the same cache line costs noticeable CPU time. That check
> outweighs the benefit of searching within a single cache line.
>
>
> Given these facts, I'd rather stick with the previously proposed version.
> Do I have to change the code somehow so that it can be merged into
> libstdc++?
>
> On Fri, 10 Mar 2023 at 14:27, Jonathan Wakely <jwakely.gcc@gmail.com> wrote:
>
>> On Thu, 9 Mar 2023 at 19:58, Александр Шитов via Libstdc++
>> <libstdc++@gcc.gnu.org> wrote:
>> >
>> > I want to propose an improvement to std::__lower_bound for arithmetic
>> > types with the standard comparators.
>> >
>> >
>> > The main idea is to use linear search on a small number of elements to
>> > aid the branch predictor and CPU caches, but only when it is not
>> > observable by the user, i.e. when a standard comparator (std::less,
>> > std::greater) is used with arithmetic types.
>> >
>> >
>> > In benchmarks I achieved roughly a twofold speedup for small vectors
>> > (16 elements) and a 10-20% speedup for large vectors (1'000 and
>> > 100'000 elements).
>> >
>> >
>> > Code: https://gist.github.com/ATGsan/8a1fdec92371d5778a65b01321c43604
>> >
>> > PR: https://github.com/ATGsan/gcc/pull/1
>>
>> This is an interesting idea, thanks.
>>
>> You limit the linear search to one cache line's worth of elements, but
>> you don't ensure that the range being searched doesn't cross two cache
>> lines, right? Maybe it doesn't matter in practice, but I wonder if
>> limiting the linear search to a single cache line would be even better.
>>
>
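
For context, here is a rough sketch of the shape of the approach from the
quoted proposal (illustrative only; the actual code is in the gist linked
above, and both the function name __lower_bound_arith and the 16-element
cut-over are made up for this example):

  #include <cstddef>
  #include <type_traits>

  // Illustrative sketch, not the actual patch: binary search as usual,
  // but once the remaining range is small, finish with a linear scan,
  // which is friendlier to the branch predictor and the cache.
  template<typename _Tp>
    const _Tp*
    __lower_bound_arith(const _Tp* __first, const _Tp* __last,
                        const _Tp& __val)
    {
      static_assert(std::is_arithmetic<_Tp>::value, "arithmetic types only");
      constexpr std::ptrdiff_t __linear_threshold = 16; // assumed cut-over

      std::ptrdiff_t __len = __last - __first;
      while (__len > __linear_threshold)
        {
          const std::ptrdiff_t __half = __len / 2;
          const _Tp* __mid = __first + __half;
          if (*__mid < __val)
            {
              __first = __mid + 1;
              __len -= __half + 1;
            }
          else
            __len = __half;
        }

      // Linear tail over the remaining [__first, __first + __len) elements.
      const _Tp* __stop = __first + __len;
      while (__first != __stop && *__first < __val)
        ++__first;
      return __first;
    }

In the real patch this would have to hook into std::__lower_bound itself and
fire only when std::less/std::greater (or the built-in operator<) is used with
arithmetic types, so the change stays unobservable to the user.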


Thread overview: 5+ messages
2023-03-09 19:57 Александр Шитов
2023-03-10 10:26 ` Jonathan Wakely
     [not found]   ` <CAP=JgFpBesxfVvcnWqE2ZmGt0s-H+4L6F1bBP2NbJiAnziCE7g@mail.gmail.com>
2023-03-29  6:26     ` Александр Шитов [this message]
2023-03-29  8:00       ` Jonathan Wakely
2023-05-13 12:02         ` Александр Шитов
