public inbox for libc-help@sourceware.org
 help / color / mirror / Atom feed
* Yield to specific thread?
@ 2021-05-20 10:42 Alexandre Bique
  2021-05-20 11:02 ` Florian Weimer
  2021-05-20 12:27 ` Godmar Back
  0 siblings, 2 replies; 9+ messages in thread
From: Alexandre Bique @ 2021-05-20 10:42 UTC (permalink / raw)
  To: libc-help

Hi,

I'm working on a soft real-time problem for audio processing.

We have 2 processes A and B.
A produces a request, B processes it and produces a response, A consumes it.

The whole thing is synchronous so it looks like
   result = process(request);
except that we execute the process function in another process for
isolation purposes.

Right now we put everything into SHM; A writes a lightweight request
into a pipe that B blocks on with read(), and B uses another pipe to
notify A about the result.
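
Roughly, a minimal sketch of what we do today (no error handling; the
payload is reduced to a single int, and all names are made up for this
example):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
  int req[2], rsp[2];
  pipe(req);
  pipe(rsp);

  /* anonymous shared mapping standing in for the real SHM segment */
  int *shm = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);

  if (fork() == 0) {                         /* process B */
    char token;
    while (read(req[0], &token, 1) == 1) {   /* block for a request */
      *shm += 1;                             /* "process" it in place */
      write(rsp[1], &token, 1);              /* notify A */
    }
    _exit(0);
  }

  for (int i = 0; i < 3; ++i) {              /* process A */
    char token = 0;
    *shm = 10 * i;                           /* publish the request in SHM */
    write(req[1], &token, 1);                /* wake B */
    read(rsp[0], &token, 1);                 /* block for the response */
    printf("result = %d\n", *shm);
  }
  close(req[1]);                             /* make B's read() return 0 */
  wait(NULL);
  return 0;
}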

On paper this approach works, but in practice we are facing
scheduling issues.
I read some of the Linux kernel source and, if I understood correctly,
the kernel does not schedule threads waiting on I/O right away but
puts them in a wake-up queue, and they will be woken by the scheduler
on the next scheduler_tick(), which depends on the scheduler tick
frequency. On a low-latency kernel the frequency is about 1000 Hz,
which is not too bad, but on most desktops it is lower than that, and
this produces jitter.

I found that blocking on a locked mutex may get the mutex owner
scheduled immediately.

Ideally I'd like to do:
A produces a request
A sched_yield_to(B)
B processes the request
B sched_yield_to(A)

In a way the execution of A and B is mutually exclusive; today we
achieve that by blocking on the pipe read.
I did not find a good design for doing this with mutexes, and I am
looking for help.

I tried a design with two mutexes, but it races and deadlocks. I can
share the sequence diagram if you want.

Thank you very much,
Alexandre Bique

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-20 10:42 Yield to specific thread? Alexandre Bique
@ 2021-05-20 11:02 ` Florian Weimer
  2021-05-20 11:09   ` Alexandre Bique
  2021-05-20 12:27 ` Godmar Back
  1 sibling, 1 reply; 9+ messages in thread
From: Florian Weimer @ 2021-05-20 11:02 UTC (permalink / raw)
  To: Alexandre Bique via Libc-help

* Alexandre Bique via Libc-help:

> Ideally I'd like to do:
> A produces a request
> A sched_yield_to(B)
> B processes the request
> B sched_yield_to(A)

This looks like an application for a condition variable or perhaps a
barrier.  If there is just a single writer, the kernel should wake up
the desired thread.
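
A minimal sketch of such a condition-variable handshake, assuming a
process-shared mutex and condvar placed in the SHM segment (the state
machine and all names here are illustrative, not a prescribed design):

#include <pthread.h>

enum { IDLE, REQUEST_READY, RESPONSE_READY };

struct channel {
  pthread_mutex_t lock;
  pthread_cond_t  cond;
  int             state;
};

/* one-time setup, e.g. by A after creating the SHM segment */
static void channel_init(struct channel *ch)
{
  pthread_mutexattr_t ma;
  pthread_condattr_t ca;
  pthread_mutexattr_init(&ma);
  pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
  pthread_condattr_init(&ca);
  pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init(&ch->lock, &ma);
  pthread_cond_init(&ch->cond, &ca);
  ch->state = IDLE;
}

/* A: publish a request (already written into SHM), wait for the reply */
static void a_submit(struct channel *ch)
{
  pthread_mutex_lock(&ch->lock);
  ch->state = REQUEST_READY;
  pthread_cond_signal(&ch->cond);            /* wake B */
  while (ch->state != RESPONSE_READY)
    pthread_cond_wait(&ch->cond, &ch->lock);
  ch->state = IDLE;
  pthread_mutex_unlock(&ch->lock);
}

/* B: wait for a request, process it, publish the response */
static void b_serve_one(struct channel *ch)
{
  pthread_mutex_lock(&ch->lock);
  while (ch->state != REQUEST_READY)
    pthread_cond_wait(&ch->cond, &ch->lock);
  /* process the request held in SHM here */
  ch->state = RESPONSE_READY;
  pthread_cond_signal(&ch->cond);            /* wake A */
  pthread_mutex_unlock(&ch->lock);
}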

Thanks,
Florian


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-20 11:02 ` Florian Weimer
@ 2021-05-20 11:09   ` Alexandre Bique
  2021-05-20 11:20     ` Konstantin Kharlamov
  0 siblings, 1 reply; 9+ messages in thread
From: Alexandre Bique @ 2021-05-20 11:09 UTC (permalink / raw)
  To: Florian Weimer; +Cc: Alexandre Bique via Libc-help

On Thu, May 20, 2021 at 1:03 PM Florian Weimer <fweimer@redhat.com> wrote:
>
> * Alexandre Bique via Libc-help:
>
> > Ideally I'd like to do:
> > A produces a request
> > A sched_yield_to(B)
> > B processes the request
> > B sched_yield_to(A)
>
> This looks like an application for a condition variable or perhaps a
> barrier.  If there is just a single writer, the kernel should wake up
> the desired thread.

I don't think condition variables or barriers would solve the
problem, because they would just put the waiting threads on the
wake-up queue, like the read() on the pipe does.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-20 11:09   ` Alexandre Bique
@ 2021-05-20 11:20     ` Konstantin Kharlamov
  2021-05-20 11:54       ` Alexandre Bique
  0 siblings, 1 reply; 9+ messages in thread
From: Konstantin Kharlamov @ 2021-05-20 11:20 UTC (permalink / raw)
  To: Alexandre Bique, Florian Weimer; +Cc: Alexandre Bique via Libc-help

On Thu, 2021-05-20 at 13:09 +0200, Alexandre Bique via Libc-help wrote:
> On Thu, May 20, 2021 at 1:03 PM Florian Weimer <fweimer@redhat.com> wrote:
> > 
> > * Alexandre Bique via Libc-help:
> > 
> > > Ideally I'd like to do:
> > > A produces a request
> > > A sched_yield_to(B)
> > > B processes the request
> > > B sched_yield_to(A)
> > 
> > This looks like an application for a condition variable or perhaps a
> > barrier.  If there is just a single writer, the kernel should wake up
> > the desired thread.
> 
> I don't think conditions or barriers would solve the problem. Because
> they would just put the waiting threads on the wake up queue like the
> read() on the pipe would.

I assume it should work. I remember Torvalds ranting about people using sched_yield() for the wrong reasons¹, and he mentioned a mutex (which apparently worked for you) as one of the possible solutions. Quoting:

> Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
>
> But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).


1: https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752
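
As a rough illustration of the futex pattern described above
(Linux-only, no error or EINTR handling; the helper names are made
up, and the futex word is assumed to live in the SHM segment shared
by A and B):

#define _GNU_SOURCE
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

enum { TURN_A, TURN_B };          /* who holds the "baton" stored in *word */

static long futex(atomic_int *uaddr, int op, int val)
{
  return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

/* Sleep in the kernel until *word holds the wanted value. */
static void wait_for(atomic_int *word, int wanted)
{
  int v;
  while ((v = atomic_load(word)) != wanted)
    futex(word, FUTEX_WAIT, v);   /* returns when *word != v or on a wake */
}

/* Publish the new value and wake the single waiter blocked on the word. */
static void hand_over(atomic_int *word, int next)
{
  atomic_store(word, next);
  futex(word, FUTEX_WAKE, 1);
}

/* A: write request to SHM; hand_over(word, TURN_B); wait_for(word, TURN_A);
   B: wait_for(word, TURN_B); process request; hand_over(word, TURN_A);   */

The default (non-FUTEX_PRIVATE_FLAG) operations are used here so the
word can be shared between the two processes.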


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-20 11:20     ` Konstantin Kharlamov
@ 2021-05-20 11:54       ` Alexandre Bique
  2021-05-25 18:21         ` Adhemerval Zanella
  0 siblings, 1 reply; 9+ messages in thread
From: Alexandre Bique @ 2021-05-20 11:54 UTC (permalink / raw)
  To: Konstantin Kharlamov; +Cc: Florian Weimer, Alexandre Bique via Libc-help

Oh I think I fixed it using 3 mutexes.
Alexandre Bique

On Thu, May 20, 2021 at 1:20 PM Konstantin Kharlamov <hi-angel@yandex.ru> wrote:
>
> On Thu, 2021-05-20 at 13:09 +0200, Alexandre Bique via Libc-help wrote:
> > On Thu, May 20, 2021 at 1:03 PM Florian Weimer <fweimer@redhat.com> wrote:
> > >
> > > * Alexandre Bique via Libc-help:
> > >
> > > > Ideally I'd like to do:
> > > > A produces a request
> > > > A sched_yield_to(B)
> > > > B processes the request
> > > > B sched_yield_to(A)
> > >
> > > This looks like an application for a condition variable or perhaps a
> > > barrier.  If there is just a single writer, the kernel should wake up
> > > the desired thread.
> >
> > I don't think conditions or barriers would solve the problem. Because
> > they would just put the waiting threads on the wake up queue like the
> > read() on the pipe would.
>
> I assume it should work. I remember Torvalds ranting about people using sched_yield() for the wrong reasons¹, and he mentioned mutex (which apparently worked for you) as one of possible solutions. Quoting:
>
> > Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
> >
> > But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).
>
>
> 1: https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752
>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-20 10:42 Yield to specific thread? Alexandre Bique
  2021-05-20 11:02 ` Florian Weimer
@ 2021-05-20 12:27 ` Godmar Back
  1 sibling, 0 replies; 9+ messages in thread
From: Godmar Back @ 2021-05-20 12:27 UTC (permalink / raw)
  To: Alexandre Bique; +Cc: William Tambe via Libc-help

On Thu, May 20, 2021 at 7:42 AM Alexandre Bique via Libc-help
<libc-help@sourceware.org> wrote:
>
> Hi,
>
> I'm working on a soft real-time problem, for audio processing.
>
> We have 2 processes A and B.
> A produces a request, B processes it and produces a response, A consumes it.
>
> The whole thing is synchronous so it looks like
>    result = process(request);
> except that we execute the process function in another process for
> isolation purposes.
>
> Right now, we put everything into SHM, and A writes a lightweight request
> into a pipe which B is blocking on using read(); and another pipe to notify
> A about the result.
>
> On the papers this approach works but in practice we are facing scheduling
> issues.
> I read some of the Linux kernel and if I understood correctly the kernel
> does not schedule right away the threads waiting on I/O but put them in a
> wake up queue, and they will be woken up by the scheduler on the next
> scheduler_tick() which depends on the scheduler tick frequency. On a low
> latency kernel the frequency is about 1000Hz which is not too bad, but on
> most desktops it is lower than that, and it produces jitter.
>

I don't think that's correct.  If a task is woken up, it can be
scheduled right away.
For instance, say you have a 2-core machine with 2 tasks running,
task A assigned to core 1 and task B assigned to core 2: a woken-up
task will run immediately (if you ignore the time it takes for the
necessary IPI).

If tasks A and B are on the same CPU/core, the woken task may or may
not run immediately.
Under CFS, if its virtual runtime is the smallest (that is, it has
the highest priority), it will preempt.
But more likely in this case is this (assuming A and B are the only
tasks on the CPU/core):
A signals B, which does not preempt A, and soon after A does something
that makes it block - at that point, B is scheduled on that core.

But in the absence of other ready-to-run tasks, Linux will not lie
idle and wait for a scheduler_tick to schedule a task that has just
become ready - that would be considered a kernel bug.

Also, if you must have thread B run once woken up, no matter what, put
it in a higher scheduling class (SCHED_FIFO or SCHED_RR), as in the
sketch below.
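
A minimal sketch of that last step (illustrative; it needs
CAP_SYS_NICE or root, and a modest priority is chosen so a runaway
SCHED_FIFO task cannot monopolize a core):

#include <sched.h>
#include <stdio.h>

/* Move the calling process into SCHED_FIFO at the given priority. */
static int make_fifo(int prio)
{
  struct sched_param sp = { .sched_priority = prio };
  if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {   /* 0 = this process */
    perror("sched_setscheduler");
    return -1;
  }
  return 0;
}

/* B would call make_fifo(10) early in its startup, before entering the
   request-processing loop. */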

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-20 11:54       ` Alexandre Bique
@ 2021-05-25 18:21         ` Adhemerval Zanella
  2021-05-26  9:36           ` Tadeus Prastowo
  2021-05-26 14:09           ` Alexandre Bique
  0 siblings, 2 replies; 9+ messages in thread
From: Adhemerval Zanella @ 2021-05-25 18:21 UTC (permalink / raw)
  To: Alexandre Bique, Konstantin Kharlamov
  Cc: Florian Weimer, Alexandre Bique via Libc-help

I think you will need a condition variable with priority support.
Unfortunately, POSIX requirements make it hard to provide in glibc.

There is a project that aims to provide it [1], and I think it would
fit the scenario you described better: you set up a condition
variable in shared memory between the two processes A and B, you give
B a higher priority than A, and when A produces a request the condvar
wakeup will wake the highest-priority waiter (in this case B).

This library uses the FUTEX_WAIT_REQUEUE_PI futex operation with a
different (and I think non-POSIX-conformant) condition variable
implementation.

[1] https://github.com/dvhart/librtpi 
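
This is not librtpi's own API (see the link above for that); just a
rough sketch of the standard-pthreads ingredients it builds on: a
process-shared mutex with priority inheritance. Note that the plain
glibc condvar wakeup order is still not priority-based, which is
exactly the gap librtpi fills.

#include <pthread.h>

struct shared_sync {        /* assumed to live in the SHM segment */
  pthread_mutex_t lock;
  pthread_cond_t  cond;
};

static int shared_sync_init(struct shared_sync *s)
{
  pthread_mutexattr_t ma;
  pthread_condattr_t ca;

  pthread_mutexattr_init(&ma);
  pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
  pthread_mutexattr_setprotocol(&ma, PTHREAD_PRIO_INHERIT);

  pthread_condattr_init(&ca);
  pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);

  if (pthread_mutex_init(&s->lock, &ma) != 0)
    return -1;
  return pthread_cond_init(&s->cond, &ca);
}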

On 20/05/2021 08:54, Alexandre Bique via Libc-help wrote:
> Oh I think I fixed it using 3 mutexes.
> Alexandre Bique
> 
> On Thu, May 20, 2021 at 1:20 PM Konstantin Kharlamov <hi-angel@yandex.ru> wrote:
>>
>> On Thu, 2021-05-20 at 13:09 +0200, Alexandre Bique via Libc-help wrote:
>>> On Thu, May 20, 2021 at 1:03 PM Florian Weimer <fweimer@redhat.com> wrote:
>>>>
>>>> * Alexandre Bique via Libc-help:
>>>>
>>>>> Ideally I'd like to do:
>>>>> A produces a request
>>>>> A sched_yield_to(B)
>>>>> B processes the request
>>>>> B sched_yield_to(A)
>>>>
>>>> This looks like an application for a condition variable or perhaps a
>>>> barrier.  If there is just a single writer, the kernel should wake up
>>>> the desired thread.
>>>
>>> I don't think conditions or barriers would solve the problem. Because
>>> they would just put the waiting threads on the wake up queue like the
>>> read() on the pipe would.
>>
>> I assume it should work. I remember Torvalds ranting about people using sched_yield() for the wrong reasons¹, and he mentioned mutex (which apparently worked for you) as one of possible solutions. Quoting:
>>
>>> Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
>>>
>>> But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).
>>
>>
>> 1: https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752
>>

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-25 18:21         ` Adhemerval Zanella
@ 2021-05-26  9:36           ` Tadeus Prastowo
  2021-05-26 14:09           ` Alexandre Bique
  1 sibling, 0 replies; 9+ messages in thread
From: Tadeus Prastowo @ 2021-05-26  9:36 UTC (permalink / raw)
  To: Adhemerval Zanella
  Cc: Alexandre Bique, Konstantin Kharlamov, Florian Weimer,
	Alexandre Bique via Libc-help

[-- Attachment #1: Type: text/plain, Size: 3286 bytes --]

Based on my observations with the attached C program (please read the
comments for usage instructions, and note that it requires root
privileges to use the POSIX real-time scheduling policy), compiled
both with and without the macro UNICORE defined, priority support
works with glibc at least on Ubuntu 16.04, provided that the process
is not allowed to migrate to another processor core.

-- 
Best regards,
Tadeus

On Tue, May 25, 2021 at 8:58 PM Adhemerval Zanella via Libc-help
<libc-help@sourceware.org> wrote:
>
> I think you will need a conditional variable with a priority support.
> Unfortunately POSIX requirements makes hard to provide it on glibc,
>
> There is a project that aims to provide it [1] and I think it would
> fit better in the scenarios you described: you setup a conditional
> variable on a shared memory between the two processes A and B, you
> setup B with higher priority than A, and when A produces a request
> the condvar wakeup event will wake the highest priority waiters
> (in the case B).
>
> This library uses the FUTEX_WAIT_REQUEUE_PI futex operations with a
> different (and I think non-POSIX conformant) conditional variable
> implementation.
>
> [1] https://github.com/dvhart/librtpi
>
> On 20/05/2021 08:54, Alexandre Bique via Libc-help wrote:
> > Oh I think I fixed it using 3 mutexes.
> > Alexandre Bique
> >
> > On Thu, May 20, 2021 at 1:20 PM Konstantin Kharlamov <hi-angel@yandex.ru> wrote:
> >>
> >> On Thu, 2021-05-20 at 13:09 +0200, Alexandre Bique via Libc-help wrote:
> >>> On Thu, May 20, 2021 at 1:03 PM Florian Weimer <fweimer@redhat.com> wrote:
> >>>>
> >>>> * Alexandre Bique via Libc-help:
> >>>>
> >>>>> Ideally I'd like to do:
> >>>>> A produces a request
> >>>>> A sched_yield_to(B)
> >>>>> B processes the request
> >>>>> B sched_yield_to(A)
> >>>>
> >>>> This looks like an application for a condition variable or perhaps a
> >>>> barrier.  If there is just a single writer, the kernel should wake up
> >>>> the desired thread.
> >>>
> >>> I don't think conditions or barriers would solve the problem. Because
> >>> they would just put the waiting threads on the wake up queue like the
> >>> read() on the pipe would.
> >>
> >> I assume it should work. I remember Torvalds ranting about people using sched_yield() for the wrong reasons¹, and he mentioned mutex (which apparently worked for you) as one of possible solutions. Quoting:
> >>
> >>> Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
> >>>
> >>> But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).
> >>
> >>
> >> 1: https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752
> >>

[-- Attachment #2: multicore-condvar.c --]
[-- Type: text/x-csrc, Size: 4172 bytes --]

#ifdef UNICORE
/* _GNU_SOURCE must be defined to use sched_setaffinity() */
#define _GNU_SOURCE
#endif

#include <stdlib.h>
#include <pthread.h>
#include <stdio.h>
#include <sched.h>
#include <string.h>

struct thread_data
{
  int *shared_data;
  pthread_mutex_t *mutex;
  pthread_cond_t *condvar;
  pthread_cond_t *done;
  int *ready_count;
  int value_to_write;
  pthread_t tid;
};

static void *job_body(void *arg)
{
  struct thread_data *thread_data = arg;

  pthread_mutex_lock(thread_data->mutex);
  ++*thread_data->ready_count;
  pthread_cond_wait(thread_data->condvar, thread_data->mutex);
  *thread_data->shared_data = thread_data->value_to_write;
  pthread_cond_signal(thread_data->done);
  pthread_mutex_unlock(thread_data->mutex);

  return NULL;
}

int main(int argc, char *argv[])
{
  int shared_data, old_shared_data, ready_count;
  pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  pthread_cond_t condvar = PTHREAD_COND_INITIALIZER;
  pthread_cond_t done = PTHREAD_COND_INITIALIZER;
  struct thread_data *thread_data;
  int thread_data_size;
  struct sched_param schedprm = {0};
  pthread_attr_t thread_attr;
#ifdef UNICORE
  cpu_set_t cores;
  CPU_ZERO(&cores);
  CPU_SET(0, &cores);
  sched_setaffinity(0, sizeof(cores), &cores);
#endif

  /* Create the data of every non-main thread */
  thread_data_size = (sched_get_priority_max(SCHED_FIFO)
                      - sched_get_priority_min(SCHED_FIFO) + 1);
  thread_data = malloc(sizeof(*thread_data) * thread_data_size);
  if (!thread_data) {
    fprintf(stderr, "Error: Cannot allocate memory to hold thread data\n");
    return -1;
  }
  for (int i = 0; i < thread_data_size; ++i) {
    thread_data[i].shared_data = &shared_data;
    thread_data[i].mutex = &mutex;
    thread_data[i].condvar = &condvar;
    thread_data[i].done = &done;
    thread_data[i].ready_count = &ready_count;
    thread_data[i].value_to_write = 1 + i;
  }
  fprintf(stderr, "As many as %d threads will be created\n", thread_data_size);

  /* Prepare the attributes shared by every non-main thread */
  pthread_attr_init(&thread_attr);
  pthread_attr_setinheritsched(&thread_attr, PTHREAD_EXPLICIT_SCHED);
  pthread_attr_setschedpolicy(&thread_attr, SCHED_FIFO);

  for (int k = 0; k < 1000; ++k) {
    ready_count = 0;
    
    /* Create threads such that thread_i has priority
       sched_get_priority_max(SCHED_FIFO) - i */
    for (int i = 0; i < thread_data_size; ++i) {
      schedprm.sched_priority = sched_get_priority_max(SCHED_FIFO) - i;
      pthread_attr_setschedparam(&thread_attr, &schedprm);
      int rc = pthread_create(&thread_data[i].tid, &thread_attr,
                              job_body, thread_data + i);
      if (rc != 0) {
        fprintf(stderr, "Error: Cannot create thread #%d: %s",
                i + 1, strerror(rc));
        free(thread_data);
        return -1;
      }
    }

    /* Wait until every non-main thread has waited on the condvar */
    pthread_mutex_lock(&mutex);
    while (ready_count != thread_data_size) {
      pthread_mutex_unlock(&mutex);
      pthread_mutex_lock(&mutex);
    }

    /* Release just one thread from the condvar's queue */
    pthread_cond_signal(&condvar);

    /* Let's see now which thread gets dequeued */
    pthread_cond_wait(&done, &mutex);

    /* If the next owner of the condition variable's mutex is determined by the
       scheduling policy and parameter, the value of shared_data will never
       change.  And, the value is none other than one because the thread with
       the highest SCHED_FIFO priority will run first. */
    if (k) {
      if (shared_data != old_shared_data) {
        old_shared_data = shared_data;
        fprintf(stderr, "shared_data is now %d (k = %d)\n", shared_data, k);
      }
    } else {
      fprintf(stderr, "shared_data is %d (k = %d)\n", shared_data, k);
      old_shared_data = shared_data;
    }

    /* Let every other non-main thread terminate */
    pthread_cond_broadcast(&condvar);
    pthread_mutex_unlock(&mutex);
    for (int i = 0; i < thread_data_size; ++i) {
      pthread_join(thread_data[i].tid, NULL);
    }
    
  }

  free(thread_data);
  return 0;

}

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Yield to specific thread?
  2021-05-25 18:21         ` Adhemerval Zanella
  2021-05-26  9:36           ` Tadeus Prastowo
@ 2021-05-26 14:09           ` Alexandre Bique
  1 sibling, 0 replies; 9+ messages in thread
From: Alexandre Bique @ 2021-05-26 14:09 UTC (permalink / raw)
  To: Adhemerval Zanella
  Cc: Konstantin Kharlamov, Florian Weimer, Alexandre Bique via Libc-help

Thank you, this is very interesting!
Alexandre Bique

On Tue, May 25, 2021 at 8:21 PM Adhemerval Zanella
<adhemerval.zanella@linaro.org> wrote:
>
> I think you will need a conditional variable with a priority support.
> Unfortunately POSIX requirements makes hard to provide it on glibc,
>
> There is a project that aims to provide it [1] and I think it would
> fit better in the scenarios you described: you setup a conditional
> variable on a shared memory between the two processes A and B, you
> setup B with higher priority than A, and when A produces a request
> the condvar wakeup event will wake the highest priority waiters
> (in the case B).
>
> This library uses the FUTEX_WAIT_REQUEUE_PI futex operations with a
> different (and I think non-POSIX conformant) conditional variable
> implementation.
>
> [1] https://github.com/dvhart/librtpi
>
> On 20/05/2021 08:54, Alexandre Bique via Libc-help wrote:
> > Oh I think I fixed it using 3 mutexes.
> > Alexandre Bique
> >
> > On Thu, May 20, 2021 at 1:20 PM Konstantin Kharlamov <hi-angel@yandex.ru> wrote:
> >>
> >> On Thu, 2021-05-20 at 13:09 +0200, Alexandre Bique via Libc-help wrote:
> >>> On Thu, May 20, 2021 at 1:03 PM Florian Weimer <fweimer@redhat.com> wrote:
> >>>>
> >>>> * Alexandre Bique via Libc-help:
> >>>>
> >>>>> Ideally I'd like to do:
> >>>>> A produces a request
> >>>>> A sched_yield_to(B)
> >>>>> B processes the request
> >>>>> B sched_yield_to(A)
> >>>>
> >>>> This looks like an application for a condition variable or perhaps a
> >>>> barrier.  If there is just a single writer, the kernel should wake up
> >>>> the desired thread.
> >>>
> >>> I don't think conditions or barriers would solve the problem. Because
> >>> they would just put the waiting threads on the wake up queue like the
> >>> read() on the pipe would.
> >>
> >> I assume it should work. I remember Torvalds ranting about people using sched_yield() for the wrong reasons¹, and he mentioned mutex (which apparently worked for you) as one of possible solutions. Quoting:
> >>
> >>> Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP system without caches. It needs to actively tell the system what you're yielding to (and optimally it would also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
> >>>
> >>> But that's not "sched_yield()" - that's something different. It's generally something like std::mutex, pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where you do the nonblocking (and non-contended) case in user space using a shared memory location, but when you get contention you tell the OS what you're waiting for (and what you're waking up).
> >>
> >>
> >> 1: https://www.realworldtech.com/forum/?threadid=189711&curpostid=189752
> >>

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2021-05-26 14:09 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-20 10:42 Yield to specific thread? Alexandre Bique
2021-05-20 11:02 ` Florian Weimer
2021-05-20 11:09   ` Alexandre Bique
2021-05-20 11:20     ` Konstantin Kharlamov
2021-05-20 11:54       ` Alexandre Bique
2021-05-25 18:21         ` Adhemerval Zanella
2021-05-26  9:36           ` Tadeus Prastowo
2021-05-26 14:09           ` Alexandre Bique
2021-05-20 12:27 ` Godmar Back

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).