public inbox for gcc-bugs@sourceware.org
From: "campbell+gcc-bugzilla at mumble dot net" <gcc-bugzilla@gcc.gnu.org>
To: gcc-bugs@gcc.gnu.org
Subject: [Bug target/110592] [SPARC] GCC should default to TSO memory model when compiling for sparc32
Date: Wed, 12 Jul 2023 20:36:17 +0000
Message-ID: <bug-110592-4-jGshdWG9fp@http.gcc.gnu.org/bugzilla/> (raw)
In-Reply-To: <bug-110592-4@http.gcc.gnu.org/bugzilla/>

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110592

--- Comment #10 from Taylor R Campbell <campbell+gcc-bugzilla at mumble dot net> ---
(In reply to Eric Botcazou from comment #9)
> > I don't understand, how would that help?  As I understand it, whenever
> > `-mcpu=v7', the memory model is just ignored -- even if we set it to TSO --
> > because all rules that depend on it are gated on TARGET_V8 || TARGET_V9 or
> > similar.
>
> Well, the subject of the PR is "GCC should default to TSO memory model when
> compiling for sparc32" so you'll get exactly that.

But defaulting to TSO doesn't seem to help with generating LDSTUB in sparcv7-only instruction streams, unless I misunderstand how this is different from trying to combine `-mcpu' and `-mmemory-model'?

> So you want to mix memory models and synchronization instructions with
> -mcpu=v7, although they were introduced in the V8 architecture?

Correct.  The idea is to have a way to generate code that works both on sparcv7 -- by avoiding v8-only instructions like SMUL/UMUL, as `-mcpu=v7' does -- and on sparcv8 -- by generating LDSTUB instructions where store-before-load ordering is needed, as `-mcpu=v8 -mmemory-model=tso' does.  I tried to spell this request as `-mcpu=v7 -mmemory-model=tso', but that doesn't generate the LDSTUB instructions needed for store-before-load ordering.

(Note that LDSTUB is available in v7 -- what's new in v8 is the relaxation of store-before-load order in TSO, in contrast to SC.  So these requirements aren't contradictory.)

Is that how Linux and Solaris work by default?  I wasn't able to elicit that behaviour by combining explicit `-mcpu' and `-mmemory-model' options, so I assumed it wouldn't be possible for it to be the default -- and I don't see how it could work, given how the code generation rules for memory barriers are gated on TARGET_V8 || TARGET_V9 or similar.
Thread overview: 11+ messages

2023-07-08  1:46 [Bug target/110592] New: " koachan+gccbugs at protonmail dot com
2023-07-08  8:25 ` [Bug target/110592] " ebotcazou at gcc dot gnu.org
2023-07-08  9:42 ` ebotcazou at gcc dot gnu.org
2023-07-09 13:02 ` martin at netbsd dot org
2023-07-09 17:47 ` ebotcazou at gcc dot gnu.org
2023-07-10 13:19 ` campbell+gcc-bugzilla at mumble dot net
2023-07-12  9:31 ` ebotcazou at gcc dot gnu.org
2023-07-12 12:17 ` campbell+gcc-bugzilla at mumble dot net
2023-07-12 14:58 ` koachan+gccbugs at protonmail dot com
2023-07-12 17:16 ` ebotcazou at gcc dot gnu.org
2023-07-12 20:36 ` campbell+gcc-bugzilla at mumble dot net [this message]