From: "adhemerval.zanella at linaro dot org" <sourceware-bugzilla@sourceware.org>
To: glibc-bugs@sourceware.org
Subject: [Bug libc/30558] SIGEV_THREAD is badly implemented
Date: Mon, 19 Jun 2023 17:41:59 +0000
Message-ID: <bug-30558-131-LD53u5oI2J@http.sourceware.org/bugzilla/>
In-Reply-To: <bug-30558-131@http.sourceware.org/bugzilla/>

https://sourceware.org/bugzilla/show_bug.cgi?id=30558

--- Comment #7 from Adhemerval Zanella <adhemerval.zanella at linaro dot org> ---
(In reply to Stas Sergeev from comment #6)
> If I could make a wish here, I'd say please implement SIGEV_THREAD on
> top of the timerfd API.  That way you avoid messing around with
> signals, and can get a consistent timer_getoverrun().

I don't think this is a promising approach for POSIX timers.  Besides adding an
extra file descriptor under the hood (which might interfere with
RLIMIT_NOFILE), a timerfd is inherited across fork, which would require keeping
track of the user-visible file descriptor and adding an atfork handler to
close it in the child.  We would also need to synthesize the sigval argument,
which would most likely require extra state that the kernel already provides
us for free.
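
To make the bookkeeping concrete, here is a minimal sketch of what a
timerfd-backed SIGEV_THREAD helper could look like (the struct and function
names are made up for illustration, and arming the timer via timerfd_settime
is omitted).  Even this toy version has to carry the sigval by hand and leave
the descriptor somewhere an atfork handler can find it:

--
#include <sys/timerfd.h>
#include <pthread.h>
#include <signal.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical per-timer state: the descriptor has to be tracked so an
   atfork handler can close it in the child, and the sigval has to be
   stored by hand; a kernel POSIX timer hands us both for free.  */
struct timer_state
{
  int fd;                       /* from timerfd_create; counts against
                                   RLIMIT_NOFILE  */
  void (*cb) (union sigval);
  union sigval value;
};

static void *
timerfd_thread (void *arg)
{
  struct timer_state *st = arg;
  uint64_t expirations;
  /* read blocks until the timer fires and yields the number of
     expirations; overruns would have to be derived from it.  */
  while (read (st->fd, &expirations, sizeof expirations)
         == sizeof expirations)
    st->cb (st->value);
  return NULL;
}
--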

I created an RFC implementation to avoid the thread creation per timer trigger
[1]: timer_create now creates one thread per SIGEV_THREAD timer, and that
thread issues the timer callback directly instead of creating and destroying a
new thread on every expiration.
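
In rough strokes the scheme looks like the sketch below (simplified, not the
actual branch code: SIGTIMER stands in for the internal signal glibc reserves,
arming via timer_settime is omitted, and on older glibc
sigev_notify_thread_id has to be spelled _sigev_un._tid):

--
#define _GNU_SOURCE
#include <time.h>
#include <signal.h>
#include <pthread.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SIGTIMER SIGRTMIN       /* stand-in for glibc's internal signal */

struct timer_ctx
{
  void (*cb) (union sigval);
  union sigval value;
};

/* One long-lived service thread per SIGEV_THREAD timer: the kernel
   timer is created with SIGEV_THREAD_ID aimed at this thread, which
   sleeps in sigwaitinfo and runs the callback in place instead of
   spawning a fresh thread on every expiration.  */
static void *
timer_service_thread (void *arg)
{
  struct timer_ctx *ctx = arg;

  sigset_t set;
  sigemptyset (&set);
  sigaddset (&set, SIGTIMER);
  pthread_sigmask (SIG_BLOCK, &set, NULL);

  struct sigevent sev = { 0 };
  sev.sigev_notify = SIGEV_THREAD_ID;
  sev.sigev_signo = SIGTIMER;
  sev.sigev_value = ctx->value;
  sev.sigev_notify_thread_id = syscall (SYS_gettid);

  timer_t timerid;
  timer_create (CLOCK_REALTIME, &sev, &timerid);

  for (;;)
    {
      siginfo_t si;
      if (sigwaitinfo (&set, &si) > 0)
        ctx->cb (si.si_value);  /* the kernel supplies the sigval */
    }
  return NULL;
}
--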

glibc allows the created thread to be cancelled or to call pthread_exit, and
it is not fully specified how the POSIX timer should act in that case.  With
the current implementation the next timer expiration simply creates a new
thread, so only timer_delete unregisters the kernel timer.  My RFC changes the
semantics so that once the callback calls pthread_exit or the thread is
cancelled, no further timer callbacks are invoked.  I think I will need to
implement something like musl does, using setjmp/longjmp to avoid aborting the
thread during the cancellation process, although I am not sure whether that is
the correct or expected semantic.
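
For reference, here is a small standalone test of that corner case (my own
illustration, not from the branch).  Under the current implementation the
pthread_exit only kills the one notification thread, so the kernel timer stays
armed and the counter keeps growing until timer_delete; under the RFC
semantics the count would stop at 1:

--
#include <time.h>
#include <signal.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <stdatomic.h>

static atomic_int hits;

static void
cb (union sigval arg)
{
  atomic_fetch_add (&hits, 1);
  pthread_exit (NULL);          /* the thread dies, the timer does not */
}

int main (void)
{
  struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                          .sigev_notify_function = cb };
  const struct itimerspec its = { { 0, 100000000 }, { 0, 100000000 } };
  timer_t t;

  timer_create (CLOCK_REALTIME, &sev, &t);
  timer_settime (t, 0, &its, NULL);
  sleep (1);
  timer_delete (t);

  printf ("callback ran %d times\n", atomic_load (&hits));
  return 0;
}
--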

Performance-wise, the new scheme does seem to use fewer CPU cycles:

--
#include <time.h>
#include <signal.h>
#include <assert.h>
#include <unistd.h>

/* Empty SIGEV_THREAD callback: the benchmark measures only the cost of
   dispatching the notification, not any work done in it.  */
static void
tf (union sigval arg)
{
}

int main (int argc, char *argv[])
{
  /* Fire every 150 ms, starting 150 ms from now.  */
  const struct itimerspec itval = {
    .it_interval = { 0, 150000000 },
    .it_value = { 0, 150000000 },
  };

  struct sigevent sigev = {
    .sigev_notify = SIGEV_THREAD,
    .sigev_notify_function = tf,
    .sigev_notify_attributes = NULL,
    .sigev_value.sival_ptr = NULL,
  };

  /* Arm 128 periodic timers at once to stress the notification path.  */
  enum { count = 128 };
  timer_t timers[count];

  for (int i = 0; i < count; i++)
    {
      assert (timer_create (CLOCK_REALTIME, &sigev, &timers[i]) == 0);
      assert (timer_settime (timers[i], 0, &itval, NULL) == 0);
    }

  /* Let the timers expire for a while, then tear them down and give any
     in-flight notifications time to drain.  */
  sleep (5);

  for (int i = 0; i < count; i++)
    assert (timer_delete (timers[i]) == 0);

  sleep (2);

  return 0;
}
--
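
The benchmark links against librt for the timer functions; I built it with
something along the lines of the following (the file name is an assumption,
and on glibc 2.34 and later the -lrt is redundant since librt was merged into
libc):

$ gcc -O2 -o timer-bench timer-bench.c -lrt -lpthread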

Master:

$ perf stat ./timer-bench

 Performance counter stats for './timer-bench':

            121,50 msec task-clock                #    0,017 CPUs utilized
               187      context-switches          #    1,539 K/sec
                 0      cpu-migrations            #    0,000 /sec
               187      page-faults               #    1,539 K/sec
       302.445.622      cycles                    #    2,489 GHz                      (72,86%)
         3.089.604      stalled-cycles-frontend   #    1,02% frontend cycles idle     (81,64%)
        47.543.776      stalled-cycles-backend    #   15,72% backend cycles idle      (83,82%)
       287.475.386      instructions              #    0,95  insn per cycle
                                                  #    0,17  stalled cycles per insn  (83,43%)
        48.578.487      branches                  #  399,835 M/sec                    (72,20%)
           459.578      branch-misses             #    0,95% of all branches          (67,88%)

       7,001998495 seconds time elapsed

       0,013313000 seconds user
       0,150883000 seconds sys

VS my RFC branch:

x86_64-linux-gnu$ perf stat ./elf/ld.so --library-path . ../timer-bench

 Performance counter stats for './elf/ld.so --library-path . ../timer-bench':

             68,61 msec task-clock                #    0,010 CPUs utilized
             4.662      context-switches          #   67,949 K/sec
               161      cpu-migrations            #    2,347 K/sec
               326      page-faults               #    4,751 K/sec
       127.514.086      cycles                    #    1,859 GHz
         3.714.619      stalled-cycles-frontend   #    2,91% frontend cycles idle
        10.407.556      stalled-cycles-backend    #    8,16% backend cycles idle
        65.273.326      instructions              #    0,51  insn per cycle
                                                  #    0,16  stalled cycles per insn
        13.918.731      branches                  #  202,866 M/sec                    (0,46%)
     <not counted>      branch-misses                                                 (0,00%)

       7,006109877 seconds time elapsed

       0,000000000 seconds user
       0,079625000 seconds sys

The extra threads might generate more context switches and page faults,
though, since each one eventually calls sigwaitinfo.  I have not analyzed
latency, although I would expect it to show better results as well.
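
A rough way to probe that would be a one-shot latency test like the following
(my own sketch; it measures the dispatch latency of a single expiration, which
is crude but enough for a comparison between the two implementations):

--
#include <time.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static struct timespec armed;

static void
cb (union sigval arg)
{
  /* Compare wall-clock now against the arming time minus the 100 ms
     programmed delay; what remains is the dispatch latency.  */
  struct timespec now;
  clock_gettime (CLOCK_MONOTONIC, &now);
  long long ns = (now.tv_sec - armed.tv_sec) * 1000000000LL
                 + (now.tv_nsec - armed.tv_nsec) - 100000000LL;
  printf ("dispatch latency: %lld ns\n", ns);
}

int main (void)
{
  struct sigevent sev = { .sigev_notify = SIGEV_THREAD,
                          .sigev_notify_function = cb };
  /* One-shot timer: it_interval stays zero.  */
  const struct itimerspec its = { .it_value = { 0, 100000000 } };
  timer_t t;

  timer_create (CLOCK_MONOTONIC, &sev, &t);
  clock_gettime (CLOCK_MONOTONIC, &armed);
  timer_settime (t, 0, &its, NULL);

  sleep (1);
  timer_delete (t);
  return 0;
}
--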


[1]
https://sourceware.org/git/?p=glibc.git;a=shortlog;h=refs/heads/azanella/bz30558-posix_timer

-- 
You are receiving this mail because:
You are on the CC list for the bug.

