public inbox for libc-alpha@sourceware.org
From: Peng Zheng <pengzheng@apache.org>
To: libc-alpha@sourceware.org
Subject: Priority Inversion and Unlimited Spin of pthread_rwlock_t
Date: Tue, 12 Mar 2024 11:19:53 +0800
Message-ID: <0e210b7b-5349-41fe-a476-87958990e03e@apache.org>


Hi,

I found several unbounded spin loops in the current pthread_rwlock
implementation. It therefore suffers from the same problem as user-space
spinlocks, as described in this LWN article ([0]):

 > One thread might be spinning on a lock while the holder has been
 > preempted and isn't running at all. In such cases, the lock will not
 > be released soon, and the spinning just wastes CPU time. In the worst
 > case, the thread that is spinning may be the one that is keeping the
 > lock holder from running, meaning that the spinning thread is
 > actively preventing the lock it needs from being released. In such
 > situations, the code should simply stop spinning and go to sleep
 > until the lock is released.

I just encountered one such issue in an embedded Linux system: there
were several readers running at SCHED_RR priority, and one writer
running at SCHED_OTHER.

Two high-priority readers were found spinning (each consuming 100% of a
CPU) in the loop near the end of `__pthread_rwlock_rdlock_full`:

   for (;;)
     {
       while (((wpf = atomic_load_relaxed (&rwlock->__data.__wrphase_futex))
               | PTHREAD_RWLOCK_FUTEX_USED) == (1 | PTHREAD_RWLOCK_FUTEX_USED))
         {
           /* omitted */
         }
       if (ready)
         /* See below.  */
         break;
       if ((atomic_load_acquire (&rwlock->__data.__readers)
            & PTHREAD_RWLOCK_WRPHASE) == 0)
         ready = true;
     }
   return 0;

Meanwhile, the SCHED_OTHER writer was just about to set
`__wrphase_futex` in `__pthread_rwlock_wrlock_full` (just one ARM
instruction away), but it never got the chance: the two readers consumed
nearly all available CPU time.

   while ((r & PTHREAD_RWLOCK_WRPHASE) == 0
          && (r >> PTHREAD_RWLOCK_READER_SHIFT) == 0)
     {
       if (atomic_compare_exchange_weak_acquire (&rwlock->__data.__readers,
                                                 &r, r | PTHREAD_RWLOCK_WRPHASE))
         {
           /* The writer was stuck HERE!  */
           atomic_store_relaxed (&rwlock->__data.__wrphase_futex, 1);
           goto done;
         }
       /* TODO Back-off.  */
     }

In ARM assembly:

   mov r3, #1        ; the writer was stuck HERE!
   str r3, [r12, #8] ; r12 holds the address of rwlock->__data, and 8 is
                     ; the offset of __wrphase_futex in __data


Unbounded user-space spinning is too dangerous to use. How about
limiting the total number of spins before suspending on a futex? Or
using rseq, as mentioned in the LWN article?

Any ideas?

[0] https://lwn.net/Articles/944895/

Regards,

-- 
Peng Zheng

Thread overview: 5+ messages
2024-03-12  3:19 Peng Zheng [this message]
2024-03-12  3:48 ` Peng Zheng
2024-03-14  7:32   ` Peng Zheng
2024-03-12 14:39 ` Florian Weimer
2024-03-13  1:48   ` Peng Zheng
