From: Torvald Riegel <triegel@redhat.com>
To: David Miller <davem@davemloft.net>
Cc: adhemerval.zanella@linaro.org, andreas@gaisler.com,
	libc-alpha@sourceware.org, software@gaisler.com
Subject: Re: Remove sparcv8 support
Date: Thu, 27 Oct 2016 10:54:00 -0000
Message-ID: <1477565575.7146.199.camel@localhost.localdomain>
In-Reply-To: <20161026.144741.1659367414224835783.davem@davemloft.net>

On Wed, 2016-10-26 at 14:47 -0400, David Miller wrote:
> From: Adhemerval Zanella <adhemerval.zanella@linaro.org>
> Date: Wed, 26 Oct 2016 16:02:50 -0200
> 
> >> I am not sure it is as simple as that. Even if the kernel makes sure
> >> that an emulated CAS is atomic against another emulated CAS, it would
> >> not guarantee atomicity against a plain store instruction on a different
> >> CPU, right? For the emulated CAS to work on an SMP system, I would think
> >> the atomic_store_relaxed and atomic_store_release functions would also
> >> need to be handled by the kernel, locking out the write while the CAS is
> >> emulated, to keep the interaction linearizable.
> >> 
> > 
> > I would expect the kernel to emulate all the atomic operations defined
> > in the ISA so as to provide correct atomic semantics. I am not really
> > sure how feasible that would be, but the idea is that, from the library's
> > standpoint, running on a machine with atomics emulated by the kernel is
> > semantically equal to running on a machine with hardware-provided atomics.
> > 
> > And I think it would not be feasible to keep pushing for C11 atomics
> > in glibc if we cannot guarantee that.
> 
> Plain stores would semantically not be allowed on such a shared value
> anyways.
> 
> If atomicity is required, then nobody should do direct stores.  Direct
> stores are unchecked and non-atomic.  Whether the kernel implements
> the CAS or the cpu does it directly has no bearing on this issue.
> 
> All entities should always use the CAS operation to modify such values.

I'm not quite sure what you're trying to say, so I'll make a few general
comments that will hopefully help clarify.

It is true that we do want to use the C11 memory model throughout glibc,
which means a data-race-freedom requirement for glibc code, which in
turn means having to use atomic operations (i.e., atomic_*()) whenever
there would otherwise be a data race (as defined by C11).
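
As a concrete, made-up example of that rule (using standard C11
<stdatomic.h> in place of glibc's internal atomic_*() macros, which play
the same role; the names here are invented for illustration), a flag
shared between threads would only ever be accessed like this:

/* Illustration only; C11 <stdatomic.h> stands in for glibc's internal
   atomic macros, and the names are invented.  */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int shared_flag;  /* read and written by multiple threads */

void
set_flag (void)
{
  /* A plain store here would be a data race with concurrent readers;
     a relaxed-MO atomic store is the minimum required.  */
  atomic_store_explicit (&shared_flag, 1, memory_order_relaxed);
}

int
get_flag (void)
{
  /* Likewise, readers must use an atomic load rather than a plain read.  */
  return atomic_load_explicit (&shared_flag, memory_order_relaxed);
}

int
main (void)
{
  set_flag ();
  printf ("flag = %d\n", get_flag ());
  return 0;
}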

The implementation of atomic_*() in glibc is an exception to that rule:
on some systems we may know that, in a controlled environment (e.g., the
function is not inlined, or volatile is used), the compiler will generate
code for a plain store/load in the implementation of an atomic_*()
function that is equivalent to a relaxed-MO atomic store/load (including
the effects of fences).
This makes concurrent relaxed-MO loads/stores work.
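
For illustration, here is a minimal sketch of the kind of mapping I mean.
The my_atomic_* names are made up, this is not glibc's actual atomic.h,
and it assumes a target where an aligned plain load/store of the type is
never torn:

/* Sketch only, not glibc's implementation: map relaxed-MO load/store to
   plain accesses through a volatile lvalue.  The volatile cast keeps the
   compiler from fusing, duplicating, or eliding the access; it adds no
   hardware ordering, which is fine for relaxed MO.  */
#define my_atomic_load_relaxed(ptr) \
  (*(volatile __typeof__ (*(ptr)) *) (ptr))

#define my_atomic_store_relaxed(ptr, val) \
  (*(volatile __typeof__ (*(ptr)) *) (ptr) = (val))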

If we also have a non-multi-core system, entering the kernel emulation
for CAS then stops all other execution on the system, so the CAS
emulation in the kernel is atomic.
If instead we have a multi-core system, either the kernel would have to
temporarily stop all other cores while emulating the CAS, or all
atomic_*() would have to use the kernel.  Which of these two options is
better is hard to say upfront.
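
To illustrate the non-multi-core case, here is a rough sketch of the kind
of CAS emulation I mean, written in Linux-kernel style; it is not the
actual sparc32 trap handler, just an illustration, and it would only
build inside a kernel tree:

/* Rough sketch, not actual kernel code: with interrupts disabled on a
   uniprocessor, nothing else can run between the load and the store, so
   the compare-and-swap below cannot interleave with any other access.  */
#include <linux/irqflags.h>   /* local_irq_save / local_irq_restore */

static unsigned long
emulated_cas (unsigned long *ptr, unsigned long oldval, unsigned long newval)
{
  unsigned long flags, prev;

  local_irq_save (flags);       /* uniprocessor: no other code runs now */
  prev = *ptr;
  if (prev == oldval)
    *ptr = newval;
  local_irq_restore (flags);

  return prev;                  /* caller treats prev == oldval as success */
}

On a multi-core system this is not enough: either every core's atomic_*()
would have to take such a trap, or the kernel would have to stop the other
cores (e.g., via cross-CPU interrupts) for the duration of the emulation.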
