public inbox for gcc-bugs@sourceware.org
From: "tschwinge at gcc dot gnu.org" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug testsuite/113005] 'libgomp.fortran/rwlock_1.f90', 'libgomp.fortran/rwlock_3.f90' execution test timeouts
Date: Thu, 21 Dec 2023 11:41:13 +0000	[thread overview]
Message-ID: <bug-113005-4-EqepauQfsh@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-113005-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113005

Thomas Schwinge <tschwinge at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
          Component|libfortran                  |testsuite
   Last reconfirmed|                            |2023-12-21
             Target|powerpc64le-linux-gnu       |
                 CC|                            |burnus at gcc dot gnu.org,
                   |                            |jakub at gcc dot gnu.org
             Status|UNCONFIRMED                 |NEW
     Ever confirmed|0                           |1

--- Comment #1 from Thomas Schwinge <tschwinge at gcc dot gnu.org> ---
Turns out, this isn't actually specific to powerpc64le-linux-gnu, but rather
the following: my testing where I saw the timeouts was not build-tree 'make
check' testing, but instead "installed" testing (where you invoke 'runtest' on
a 'make install'ed GCC tree).  In that case, r266482 "Tweak libgomp env vars in
parallel make check (take 2)" is not in effect, that is, there's no limiting to
'OMP_NUM_THREADS=8'.

For example, manually running the '-O0' variant of
'libgomp.fortran/rwlock_1.f90' on a "big-iron" x86_64-pc-linux-gnu system:

    $ grep ^model\ name < /proc/cpuinfo | uniq -c
        256 model name  : AMD EPYC 7V13 64-Core Processor
    $ \time env OMP_NUM_THREADS=[...] LD_LIBRARY_PATH=[...] ./rwlock_1.exe

..., I produce the following data on an idle system:

'OMP_NUM_THREADS=8':

    0.16user 0.56system 0:02.36elapsed 31%CPU (0avgtext+0avgdata
4452maxresident)k
    0.17user 0.54system 0:02.30elapsed 30%CPU (0avgtext+0avgdata
4532maxresident)k

'OMP_NUM_THREADS=16':

    0.40user 1.03system 0:04.52elapsed 31%CPU (0avgtext+0avgdata
5832maxresident)k
    0.49user 0.99system 0:04.39elapsed 33%CPU (0avgtext+0avgdata
5876maxresident)k

'OMP_NUM_THREADS=32':

    0.98user 2.36system 0:09.33elapsed 35%CPU (0avgtext+0avgdata
8528maxresident)k
    0.98user 2.25system 0:09.02elapsed 35%CPU (0avgtext+0avgdata
8548maxresident)k

'OMP_NUM_THREADS=64':

    1.82user 5.83system 0:18.44elapsed 41%CPU (0avgtext+0avgdata
13952maxresident)k
    1.54user 6.03system 0:18.22elapsed 41%CPU (0avgtext+0avgdata
13996maxresident)k

'OMP_NUM_THREADS=128':

    3.71user 12.41system 0:38.02elapsed 42%CPU (0avgtext+0avgdata
24376maxresident)k
    3.96user 12.52system 0:39.34elapsed 41%CPU (0avgtext+0avgdata
24476maxresident)k

'OMP_NUM_THREADS=256' (or not set, for that matter):

    9.65user 25.19system 1:20.93elapsed 43%CPU (0avgtext+0avgdata
45816maxresident)k
    8.99user 25.82system 1:19.40elapsed 43%CPU (0avgtext+0avgdata
45636maxresident)k

For comparison, if I remove 'LD_LIBRARY_PATH', such that the system-wide GCC 10
libraries are used, I get for the latter case:

    9.28user 24.54system 1:22.09elapsed 41%CPU (0avgtext+0avgdata
45588maxresident)k
    11.26user 24.51system 1:24.32elapsed 42%CPU (0avgtext+0avgdata
45712maxresident)k

..., so, curiously, the new "rwlock" libgfortran implementation is only a
little faster than the old "mutex"-based GCC 10 one.  (But presumably that
depends on the hardware or other factors?)

Anyway: should these test cases be limiting themselves to some lower
'OMP_NUM_THREADS', for example via 'num_threads' clauses?
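To illustrate what such a cap could look like (a hypothetical sketch, not the
actual 'rwlock_1.f90' contents — the loop body and names are made up), a
'num_threads' clause bounds the team size regardless of what
'OMP_NUM_THREADS' is set to in the environment:

```fortran
! Hypothetical sketch: cap the thread count in the test itself via a
! num_threads clause, so an unset or huge OMP_NUM_THREADS (e.g. on a
! 256-CPU machine) cannot blow up the execution time.
program rwlock_sketch
  implicit none
  integer :: i
  !$omp parallel do num_threads(8)
  do i = 1, 100
     ! ... concurrent I/O exercising libgfortran's rwlock would go here ...
  end do
  !$omp end parallel do
end program rwlock_sketch
```

With 'num_threads(8)', the clause takes precedence over the environment
variable for that construct, which would make the runtime independent of the
machine's CPU count.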

The powerpc64le-linux-gnu systems:

    $ grep ^cpu < /proc/cpuinfo | uniq -c
        160 cpu             : POWER8 (raw), altivec supported
        152 cpu             : POWER8NVL (raw), altivec supported
        128 cpu             : POWER9, altivec supported


Thread overview: 15+ messages
2023-12-13 20:46 [Bug libfortran/113005] New: " tschwinge at gcc dot gnu.org
2023-12-21 11:41 ` tschwinge at gcc dot gnu.org [this message]
2023-12-21 11:47 ` [Bug testsuite/113005] " jakub at gcc dot gnu.org
2023-12-21 11:50 ` jakub at gcc dot gnu.org
2023-12-22  6:52 ` lipeng.zhu at intel dot com
2023-12-22  8:48 ` jakub at gcc dot gnu.org
2023-12-22  8:59 ` jakub at gcc dot gnu.org
2023-12-22  9:02 ` jakub at gcc dot gnu.org
2023-12-22  9:04 ` jakub at gcc dot gnu.org
2023-12-22  9:59 ` lipeng.zhu at intel dot com
2023-12-22 11:14 ` jakub at gcc dot gnu.org
2023-12-25  2:55 ` lipeng.zhu at intel dot com
2023-12-25  6:22 ` lipeng.zhu at intel dot com
2023-12-25  7:31 ` lipeng.zhu at intel dot com
2024-01-22  1:02 ` lipeng.zhu at intel dot com
