public inbox for fortran@gcc.gnu.org
From: "H.J. Lu" <hjl.tools@gmail.com>
To: "Zhu, Lipeng" <lipeng.zhu@intel.com>
Cc: Jakub Jelinek <jakub@redhat.com>,
	"fortran@gcc.gnu.org" <fortran@gcc.gnu.org>,
	 "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>,
	"Deng, Pan" <pan.deng@intel.com>,
	 "rep.dot.nop@gmail.com" <rep.dot.nop@gmail.com>,
	"Li, Tianyou" <tianyou.li@intel.com>,
	 "tkoenig@netcologne.de" <tkoenig@netcologne.de>,
	"Guo, Wangyang" <wangyang.guo@intel.com>
Subject: Re: [PATCH v7] libgfortran: Replace mutex with rwlock
Date: Mon, 11 Dec 2023 09:45:54 -0800
Message-ID: <CAMe9rOp2zcGCLSEnO4-Xf78oMpcHx7R=odvkTCG+khziAMV+CQ@mail.gmail.com>
In-Reply-To: <PH7PR11MB6056464BB099DCD145232AC59F88A@PH7PR11MB6056.namprd11.prod.outlook.com>

On Sat, Dec 9, 2023 at 7:25 PM Zhu, Lipeng <lipeng.zhu@intel.com> wrote:
>
> On 2023/12/9 23:23, Jakub Jelinek wrote:
> > On Sat, Dec 09, 2023 at 10:39:45AM -0500, Lipeng Zhu wrote:
> > > This patch introduces an rwlock and splits the read/write accesses to
> > > the unit_root tree and unit_cache, using the rwlock instead of the
> > > mutex, to increase CPU efficiency. In the get_gfc_unit function only
> > > around 30% of calls step into insert_unit; in most instances the unit
> > > is found while reading the unit_cache or the unit_root tree. Splitting
> > > the read and write phases with an rwlock therefore allows more
> > > parallelism.
> > >
> > > As a side note, the IPC metric gains around 9x on our test server
> > > with 220 cores. The benchmark we used is
> > > https://github.com/rwesson/NEAT
> > >
> > > libgcc/ChangeLog:
> > >
> > >     * gthr-posix.h (__GTHREAD_RWLOCK_INIT): New macro.
> > >     (__gthrw): New function.
> > >     (__gthread_rwlock_rdlock): New function.
> > >     (__gthread_rwlock_tryrdlock): New function.
> > >     (__gthread_rwlock_wrlock): New function.
> > >     (__gthread_rwlock_trywrlock): New function.
> > >     (__gthread_rwlock_unlock): New function.
> > >
> > > libgfortran/ChangeLog:
> > >
> > >     * io/async.c (DEBUG_LINE): New macro.
> > >     * io/async.h (RWLOCK_DEBUG_ADD): New macro.
> > >     (CHECK_RDLOCK): New macro.
> > >     (CHECK_WRLOCK): New macro.
> > >     (TAIL_RWLOCK_DEBUG_QUEUE): New macro.
> > >     (IN_RWLOCK_DEBUG_QUEUE): New macro.
> > >     (RDLOCK): New macro.
> > >     (WRLOCK): New macro.
> > >     (RWUNLOCK): New macro.
> > >     (RD_TO_WRLOCK): New macro.
> > >     (INTERN_RDLOCK): New macro.
> > >     (INTERN_WRLOCK): New macro.
> > >     (INTERN_RWUNLOCK): New macro.
> > >     * io/io.h (struct gfc_unit): Change UNIT_LOCK to UNIT_RWLOCK in
> > >     a comment.
> > >     (unit_lock): Remove, including associated internal_proto.
> > >     (unit_rwlock): New declarations, including associated internal_proto.
> > >     (dec_waiting_unlocked): Use WRLOCK and RWUNLOCK on unit_rwlock
> > >     instead of __gthread_mutex_lock and __gthread_mutex_unlock on
> > >     unit_lock.
> > >     * io/transfer.c (st_read_done_worker): Use WRLOCK and RWUNLOCK on
> > >     unit_rwlock instead of LOCK and UNLOCK on unit_lock.
> > >     (st_write_done_worker): Likewise.
> > >     * io/unit.c: Change UNIT_LOCK to UNIT_RWLOCK in 'IO locking rules'
> > >     comment. Use unit_rwlock variable instead of unit_lock variable.
> > >     (get_gfc_unit_from_unit_root): New function.
> > >     (get_gfc_unit): Use RDLOCK, WRLOCK and RWUNLOCK on unit_rwlock
> > >     instead of LOCK and UNLOCK on unit_lock.
> > >     (close_unit_1): Use WRLOCK and RWUNLOCK on unit_rwlock instead of
> > >     LOCK and UNLOCK on unit_lock.
> > >     (close_units): Likewise.
> > >     (newunit_alloc): Use RWUNLOCK on unit_rwlock instead of UNLOCK on
> > >     unit_lock.
> > >     * io/unix.c (find_file): Use RDLOCK and RWUNLOCK on unit_rwlock
> > >     instead of LOCK and UNLOCK on unit_lock.
> > >     (flush_all_units): Use WRLOCK and RWUNLOCK on unit_rwlock instead
> > >     of LOCK and UNLOCK on unit_lock.
> >
> > Ok for trunk, thanks.
> >
> >       Jakub
>
> Thanks! Looking forward to this landing on trunk.
>
> Lipeng Zhu

Pushed for you.

Thanks.

-- 
H.J.
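
To make the approach described in the quoted patch concrete, the following
is a minimal, self-contained sketch of a read-mostly lookup protected by a
POSIX rwlock. It is not the actual libgfortran code: the names unit,
unit_list, lookup_unit and get_unit are hypothetical, and the real
implementation keeps a small unit_cache array plus a tree rooted at
unit_root rather than a linked list. Readers take the lock shared; only a
lookup miss escalates to the exclusive lock, and the lookup is repeated
after escalation because POSIX rwlocks cannot be upgraded atomically.

    #include <pthread.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for gfc_unit.  */
    typedef struct unit
    {
      int number;
      struct unit *next;
    } unit;

    static unit *unit_list;
    static pthread_rwlock_t unit_rwlock = PTHREAD_RWLOCK_INITIALIZER;

    /* Search under whichever lock mode the caller already holds.  */
    static unit *
    lookup_unit (int n)
    {
      for (unit *u = unit_list; u; u = u->next)
        if (u->number == n)
          return u;
      return NULL;
    }

    /* Find a unit, creating it on a miss.  This mirrors the shape of the
       RDLOCK / RD_TO_WRLOCK / RWUNLOCK usage: most calls finish while
       holding only the shared (read) lock.  */
    unit *
    get_unit (int n)
    {
      pthread_rwlock_rdlock (&unit_rwlock);        /* RDLOCK */
      unit *u = lookup_unit (n);
      if (u == NULL)
        {
          /* POSIX rwlocks cannot be upgraded in place, so the read lock is
             dropped and the write lock acquired; the lookup must then be
             repeated, because another thread may have inserted the unit in
             the meantime.  */
          pthread_rwlock_unlock (&unit_rwlock);
          pthread_rwlock_wrlock (&unit_rwlock);    /* RD_TO_WRLOCK */
          u = lookup_unit (n);
          if (u == NULL)
            {
              u = calloc (1, sizeof (*u));
              if (u != NULL)
                {
                  u->number = n;
                  u->next = unit_list;
                  unit_list = u;                   /* insert path, ~30% of calls */
                }
            }
        }
      pthread_rwlock_unlock (&unit_rwlock);        /* RWUNLOCK */
      return u;
    }

The 30% figure quoted in the description is the fraction of get_gfc_unit
calls that reach the insert path; the remaining lookups complete entirely
under the shared lock, which is where the gain on many-core machines comes
from.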
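
On the libgcc side, the ChangeLog above adds rwlock counterparts to the
existing __gthread mutex wrappers in gthr-posix.h. Below is a standalone
analogue that assumes threads are always active; the real wrappers
additionally guard each call with __gthread_active_p() and reach the
pthread symbols through the header's __gthrw weak-reference machinery, and
the names used here are local to this sketch rather than the actual
__gthread_rwlock_* symbols.

    #include <pthread.h>

    /* Thin pass-through wrappers in the spirit of the new
       __gthread_rwlock_* functions; illustrative names only.  */
    typedef pthread_rwlock_t gthr_rwlock_t;
    #define GTHR_RWLOCK_INIT PTHREAD_RWLOCK_INITIALIZER

    static inline int
    gthr_rwlock_rdlock (gthr_rwlock_t *rwlock)
    {
      return pthread_rwlock_rdlock (rwlock);
    }

    static inline int
    gthr_rwlock_tryrdlock (gthr_rwlock_t *rwlock)
    {
      return pthread_rwlock_tryrdlock (rwlock);
    }

    static inline int
    gthr_rwlock_wrlock (gthr_rwlock_t *rwlock)
    {
      return pthread_rwlock_wrlock (rwlock);
    }

    static inline int
    gthr_rwlock_trywrlock (gthr_rwlock_t *rwlock)
    {
      return pthread_rwlock_trywrlock (rwlock);
    }

    static inline int
    gthr_rwlock_unlock (gthr_rwlock_t *rwlock)
    {
      return pthread_rwlock_unlock (rwlock);
    }

Keeping this behind the gthread wrapper layer lets libgfortran's
RDLOCK/WRLOCK/RWUNLOCK macros stay portable, with targets whose gthread
model provides no rwlock presumably falling back to the existing mutex
path.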

Thread overview: 35+ messages
2023-05-09  2:32 [PATCH v4] " Zhu, Lipeng
2023-05-16  7:08 ` Zhu, Lipeng
2023-05-23  2:53   ` Zhu, Lipeng
2023-05-24 19:18     ` Thomas Koenig
2023-08-18  3:06       ` Zhu, Lipeng
2023-09-14  8:33         ` Zhu, Lipeng
2023-10-23  1:21           ` Zhu, Lipeng
2023-10-23  5:52             ` Thomas Koenig
2023-10-23 23:59               ` Zhu, Lipeng
2023-11-01 10:14                 ` Zhu, Lipeng
2023-11-02  9:58                   ` Bernhard Reutner-Fischer
2023-11-23  9:36                     ` Zhu, Lipeng
2023-12-07  5:18                       ` Zhu, Lipeng
2023-08-18  3:18       ` [PATCH v6] " Zhu, Lipeng
2023-12-08 10:19         ` Jakub Jelinek
2023-12-09 15:13           ` Zhu, Lipeng
2023-12-09 15:39             ` [PATCH v7] " Lipeng Zhu
2023-12-09 15:23               ` Jakub Jelinek
2023-12-10  3:25                 ` Zhu, Lipeng
2023-12-11 17:45                   ` H.J. Lu [this message]
2023-12-12  2:05                     ` Zhu, Lipeng
2023-12-13 20:52                       ` Thomas Schwinge
2023-12-14  2:28                         ` Zhu, Lipeng
2023-12-14 12:29                           ` Thomas Schwinge
2023-12-14 12:39                             ` Jakub Jelinek
2023-12-15  5:43                               ` Zhu, Lipeng
2023-12-21 11:42                         ` Thomas Schwinge
2023-12-22  6:48                           ` Lipeng Zhu
2024-01-03  9:14                           ` Lipeng Zhu
2024-01-17 13:25                             ` Lipeng Zhu
2023-12-14 15:50               ` Richard Earnshaw (lists)
2023-12-15 11:31                 ` Lipeng Zhu
2023-12-15 19:23                   ` Richard Earnshaw
2024-01-02 11:57                     ` Vaseeharan Vinayagamoorthy
2024-01-03  1:02                       ` Lipeng Zhu
