public inbox for gcc-bugs@sourceware.org
From: "lh_mouse at 126 dot com" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/80878] -mcx16 (enable 128 bit CAS) on x86_64 seems not to work on 7.1.0
Date: Sun, 10 Dec 2023 09:54:10 +0000	[thread overview]
Message-ID: <bug-80878-4-9iYTZgLomP@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-80878-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80878

--- Comment #42 from LIU Hao <lh_mouse at 126 dot com> ---
(In reply to Yongwei Wu from comment #27)
> Can anyone show a valid use case for a non-lock-free version of 128-bit
> atomic_compare_exchange?
> 
> I am trying to use it in a data structure intended to be lock-free. I am
> surprised to find that the C++ std::atomic::compare_exchange_weak does not
> result in lock-free code for a 128-bit struct intended for ABA-free CAS. As
> a result, the GCC-generated code is MUCH slower than the mutex-based version
> in my 8-thread contention test, defeating its purpose entirely. I am
> talking about a 10x difference, and the Clang-generated code is more than
> 200x faster in the same test.
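
(For reference, the scenario above has roughly this shape. This is a
minimal sketch, not the actual code from comment #27; `Node` and
`try_push` are hypothetical names:)

  #include <atomic>
  #include <cstdint>

  // 16-byte value: a pointer plus an ABA counter.
  struct Node {
      void*    ptr;
      uint64_t counter;
  };

  std::atomic<Node> head;

  bool try_push(void* p) {
      Node expected = head.load(std::memory_order_relaxed);
      // Bump the counter so a recycled pointer cannot pass the CAS (ABA).
      Node desired{p, expected.counter + 1};
      // Whether this becomes an inlined LOCK CMPXCHG16B or a call into
      // libatomic (which may take a lock) is precisely what this bug
      // report is about.
      return head.compare_exchange_weak(expected, desired,
                                        std::memory_order_acq_rel,
                                        std::memory_order_relaxed);
  }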

[I think this is off topic though.]

I tested CMPXCHG16B with inline assembly on an i7-1165G7 (Dell XPS 13 9305),
and it turned out to be much slower than CMPXCHG, even slower than a pair of
calls to `pthread_mutex_lock()` and unlock. Similar results were observed on
a desktop i7-11700 and a Cascade Lake server Xeon. The performance
degradation might be caused by more μops, extra locking work for the wider
operands, and more cache-coherency traffic, which makes some sense if we
assume the CPU is optimized mostly for 8-byte accesses.
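
(A minimal sketch of the kind of inline assembly that was tested; this is
not the exact benchmark code, and it assumes GCC or Clang on x86-64 with
flag-output constraints. `U128` and `cas16b` are hypothetical names:)

  #include <cstdint>

  // CMPXCHG16B requires a 16-byte-aligned memory operand.
  struct alignas(16) U128 {
      uint64_t lo, hi;
  };

  // Compare *mem with *expected; if equal, store `desired` and return true.
  // On failure, *expected receives the value actually found in *mem.
  static bool cas16b(U128* mem, U128* expected, U128 desired) {
      bool ok;
      asm volatile("lock cmpxchg16b %[mem]"
                   : [mem] "+m" (*mem),
                     "+a" (expected->lo), "+d" (expected->hi),
                     "=@ccz" (ok)           // ZF is set iff the CAS succeeded
                   : "b" (desired.lo), "c" (desired.hi)
                   : "memory");
      return ok;
  }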

The conclusion is probably that 16-byte compare-and-swap isn't recommended
in performance-sensitive code.


Thread overview: 23+ messages
     [not found] <bug-80878-4@http.gcc.gnu.org/bugzilla/>
2020-04-18 17:14 ` avi@cloudius-systems.com
2020-04-18 17:23 ` fw at gcc dot gnu.org
2020-04-18 17:31 ` avi@cloudius-systems.com
2020-04-18 17:40 ` fw at gcc dot gnu.org
2020-04-18 18:13 ` avi@cloudius-systems.com
2020-04-18 18:18 ` avi@cloudius-systems.com
2020-04-18 18:41 ` fw at gcc dot gnu.org
2021-03-14  3:21 ` wuyongwei at gmail dot com
2021-03-14  3:46 ` wuyongwei at gmail dot com
2021-03-16 16:02 ` wuyongwei at gmail dot com
2021-05-06 17:50 ` s_gccbugzilla at nedprod dot com
2021-05-06 19:32 ` pinskia at gcc dot gnu.org
2021-05-06 20:53 ` liblfds_gccbz at winterflaw dot net
2021-05-07 12:27 ` s_gccbugzilla at nedprod dot com
2021-05-07 14:07 ` redi at gcc dot gnu.org
2021-05-07 14:44 ` s_gccbugzilla at nedprod dot com
2022-11-03  9:55 ` lh_mouse at 126 dot com
2022-11-03 10:04 ` jakub at gcc dot gnu.org
2022-11-03 10:32 ` fw at gcc dot gnu.org
2022-11-03 11:16 ` admin_public at liblfds dot org
2023-11-16  1:33 ` lh_mouse at 126 dot com
2023-12-10  9:54 ` lh_mouse at 126 dot com [this message]
2023-12-10 10:32 ` admin_public at liblfds dot org
