Date: Tue, 18 Apr 2023 13:17:13 -0700 (PDT)
Subject: Re: [PATCH v6] RISCV: Inline subword atomic ops
In-Reply-To: <20230418163913.2429812-1-patrick@rivosinc.com>
CC: gcc-patches@gcc.gnu.org, Kito Cheng, david.abd@gmail.com, jeffreyalaw@gmail.com, schwab@linux-m68k.org, Patrick O'Neill
From: Palmer Dabbelt
To: Patrick O'Neill

On Tue, 18 Apr 2023 09:39:13 PDT (-0700), Patrick O'Neill wrote:
> RISC-V has no support for subword atomic operations; code
> currently generates libatomic library calls.
>
> This patch changes the default behavior to inline subword atomic calls
> (using the same logic as the existing library call).
> Behavior can be specified using the -minline-atomics and
> -mno-inline-atomics command line flags.
>
> libgcc/config/riscv/atomic.c has the same logic implemented in asm.
> This will need to stay for backwards compatibility and the
> -mno-inline-atomics flag.
>
> 2023-04-18  Patrick O'Neill
>
> 	PR target/104338
> 	* riscv-protos.h: Add helper function stubs.
> 	* riscv.cc: Add helper functions for subword masking.
> 	* riscv.opt: Add command-line flag.
> 	* sync.md: Add masking logic and inline asm for fetch_and_op,
> 	fetch_and_nand, CAS, and exchange ops.
> 	* invoke.texi: Add blurb regarding command-line flag.
> 	* inline-atomics-1.c: New test.
> 	* inline-atomics-2.c: Likewise.
> 	* inline-atomics-3.c: Likewise.
> 	* inline-atomics-4.c: Likewise.
> 	* inline-atomics-5.c: Likewise.
> 	* inline-atomics-6.c: Likewise.
> 	* inline-atomics-7.c: Likewise.
> 	* inline-atomics-8.c: Likewise.
> 	* atomic.c: Add reference to duplicate logic.
>
> Signed-off-by: Patrick O'Neill
> Signed-off-by: Palmer Dabbelt
> ---
> v5: https://inbox.sourceware.org/gcc-patches/20230418142858.2424851-1-patrick@rivosinc.com/
>
> Addressed Andreas Schwab's comments about the flags/documentation.
> https://inbox.sourceware.org/gcc-patches/87y1mpb57m.fsf@igel.home/
>
> No new failures on trunk.

Looks like Jeff had some comments as well.

IMO we should be targeting this for gcc-13: it's enough of a headache
for distros that they'll likely backport it anyway, so we might as well
just take on the pain ourselves.

Since Jeff and Kito have chimed in on the code I'll let them have some
time to look; I wrote some of it, so I'm OK with it.

> ---
> The mapping implemented here matches libatomic.  That mapping changes if
> "Implement ISA Manual Table A.6 Mappings" is merged.  Depending on which
> patch is merged first, I will update the other to make sure the
> correct mapping is emitted.
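For anyone else reviewing the masking logic: all of these new expanders
share the same shape, which is roughly the following C (a sketch of
mine, not code from the patch; a fetch-and-add on a char, using RISC-V's
little-endian byte order):

  #include <stdint.h>

  /* Sketch only: emulate a byte-sized fetch-and-add with a word-sized
     CAS loop.  This is the same align/shift/mask dance that
     riscv_subword_address and riscv_lshift_subword set up; the patch
     open-codes the loop with lr.w.aq/sc.w.rl instead of a builtin.  */
  static unsigned char
  byte_fetch_add (unsigned char *p, unsigned char v)
  {
    uint32_t *w = (uint32_t *) ((uintptr_t) p & ~(uintptr_t) 3);
    unsigned shift = ((uintptr_t) p & 3) * 8;   /* byte offset in bits */
    uint32_t mask = (uint32_t) 0xff << shift;

    uint32_t old = __atomic_load_n (w, __ATOMIC_RELAXED);
    uint32_t desired;
    do
      {
        /* Do the op on the whole word, keep only our byte...  */
        uint32_t opped = (old + ((uint32_t) v << shift)) & mask;
        /* ...then splice it back into the untouched bytes.  */
        desired = (old & ~mask) | opped;
      }
    while (!__atomic_compare_exchange_n (w, &old, desired, 0,
                                         __ATOMIC_SEQ_CST,
                                         __ATOMIC_SEQ_CST));
    return (old & mask) >> shift;
  }

The mask/not_mask pair is what lets the other bytes of the word ride
through the store unmodified.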
> https://gcc.gnu.org/pipermail/gcc-patches/2023-April/615748.html
> ---
>  gcc/config/riscv/riscv-protos.h               |   2 +
>  gcc/config/riscv/riscv.cc                     |  50 ++
>  gcc/config/riscv/riscv.opt                    |   4 +
>  gcc/config/riscv/sync.md                      | 314 ++++++++++
>  gcc/doc/invoke.texi                           |  10 +-
>  .../gcc.target/riscv/inline-atomics-1.c       |  18 +
>  .../gcc.target/riscv/inline-atomics-2.c       |  19 +
>  .../gcc.target/riscv/inline-atomics-3.c       | 569 ++++++++++++++++++
>  .../gcc.target/riscv/inline-atomics-4.c       | 566 +++++++++++++++++
>  .../gcc.target/riscv/inline-atomics-5.c       |  87 +++
>  .../gcc.target/riscv/inline-atomics-6.c       |  87 +++
>  .../gcc.target/riscv/inline-atomics-7.c       |  69 +++
>  .../gcc.target/riscv/inline-atomics-8.c       |  69 +++
>  libgcc/config/riscv/atomic.c                  |   2 +
>  14 files changed, 1865 insertions(+), 1 deletion(-)
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-1.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-2.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-3.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-4.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-5.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-6.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-7.c
>  create mode 100644 gcc/testsuite/gcc.target/riscv/inline-atomics-8.c
>
> diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h
> index 5244e8dcbf0..02b33e02020 100644
> --- a/gcc/config/riscv/riscv-protos.h
> +++ b/gcc/config/riscv/riscv-protos.h
> @@ -79,6 +79,8 @@ extern void riscv_reinit (void);
>  extern poly_uint64 riscv_regmode_natural_size (machine_mode);
>  extern bool riscv_v_ext_vector_mode_p (machine_mode);
>  extern bool riscv_shamt_matches_mask_p (int, HOST_WIDE_INT);
> +extern void riscv_subword_address (rtx, rtx *, rtx *, rtx *, rtx *);
> +extern void riscv_lshift_subword (machine_mode, rtx, rtx, rtx *);
>
>  /* Routines implemented in riscv-c.cc.  */
>  void riscv_cpu_cpp_builtins (cpp_reader *);
> diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc
> index e4937d1af25..fa0247be22f 100644
> --- a/gcc/config/riscv/riscv.cc
> +++ b/gcc/config/riscv/riscv.cc
> @@ -7143,6 +7143,56 @@ riscv_zero_call_used_regs (HARD_REG_SET need_zeroed_hardregs)
>  			 & ~zeroed_hardregs);
>  }
>
> +/* Helper function for extracting a subword from memory.  */
> +
> +void
> +riscv_subword_address (rtx mem, rtx *aligned_mem, rtx *shift, rtx *mask,
> +		       rtx *not_mask)
> +{
> +  /* Align the memory address to a word.  */
> +  rtx addr = force_reg (Pmode, XEXP (mem, 0));
> +
> +  rtx aligned_addr = gen_reg_rtx (Pmode);
> +  emit_move_insn (aligned_addr, gen_rtx_AND (Pmode, addr,
> +					     gen_int_mode (-4, Pmode)));
> +
> +  *aligned_mem = change_address (mem, SImode, aligned_addr);
> +
> +  /* Calculate the shift amount.  */
> +  emit_move_insn (*shift, gen_rtx_AND (SImode, gen_lowpart (SImode, addr),
> +				       gen_int_mode (3, SImode)));
> +  emit_move_insn (*shift, gen_rtx_ASHIFT (SImode, *shift,
> +					  gen_int_mode (3, SImode)));
> +
> +  /* Calculate the mask.  */
> +  int unshifted_mask;
> +  if (GET_MODE (mem) == QImode)
> +    unshifted_mask = 0xFF;
> +  else
> +    unshifted_mask = 0xFFFF;
> +
> +  emit_move_insn (*mask, gen_int_mode (unshifted_mask, SImode));
> +
> +  emit_move_insn (*mask, gen_rtx_ASHIFT (SImode, *mask,
> +					 gen_lowpart (QImode, *shift)));
> +
> +  emit_move_insn (*not_mask, gen_rtx_NOT (SImode, *mask));
> +}
> +
> +/* Leftshift a subword within an SImode register.  */
> +
> +void
> +riscv_lshift_subword (machine_mode mode, rtx value, rtx shift,
> +		      rtx *shifted_value)
> +{
> +  rtx value_reg = gen_reg_rtx (SImode);
> +  emit_move_insn (value_reg, simplify_gen_subreg (SImode, value,
> +						  mode, 0));
> +
> +  emit_move_insn (*shifted_value, gen_rtx_ASHIFT (SImode, value_reg,
> +						  gen_lowpart (QImode, shift)));
> +}
> +
>  /* Initialize the GCC target structure.  */
>  #undef TARGET_ASM_ALIGNED_HI_OP
>  #define TARGET_ASM_ALIGNED_HI_OP "\t.half\t"
> diff --git a/gcc/config/riscv/riscv.opt b/gcc/config/riscv/riscv.opt
> index ff1dd4ddd4f..bc5e63ab3e6 100644
> --- a/gcc/config/riscv/riscv.opt
> +++ b/gcc/config/riscv/riscv.opt
> @@ -254,3 +254,7 @@ Enum(isa_spec_class) String(20191213) Value(ISA_SPEC_CLASS_20191213)
>  misa-spec=
>  Target RejectNegative Joined Enum(isa_spec_class) Var(riscv_isa_spec) Init(TARGET_DEFAULT_ISA_SPEC)
>  Set the version of RISC-V ISA spec.
> +
> +minline-atomics
> +Target Var(TARGET_INLINE_SUBWORD_ATOMIC) Init(1)
> +Always inline subword atomic operations.
> diff --git a/gcc/config/riscv/sync.md b/gcc/config/riscv/sync.md
> index c932ef87b9d..5adbe08e2d6 100644
> --- a/gcc/config/riscv/sync.md
> +++ b/gcc/config/riscv/sync.md
> @@ -21,8 +21,11 @@
>
>  (define_c_enum "unspec" [
>    UNSPEC_COMPARE_AND_SWAP
> +  UNSPEC_COMPARE_AND_SWAP_SUBWORD
>    UNSPEC_SYNC_OLD_OP
> +  UNSPEC_SYNC_OLD_OP_SUBWORD
>    UNSPEC_SYNC_EXCHANGE
> +  UNSPEC_SYNC_EXCHANGE_SUBWORD
>    UNSPEC_ATOMIC_STORE
>    UNSPEC_MEMORY_BARRIER
>  ])
> @@ -91,6 +94,143 @@
>    [(set_attr "type" "atomic")
>     (set (attr "length") (const_int 8))])
>
> +(define_insn "subword_atomic_fetch_strong_<atomic_optab>"
> +  [(set (match_operand:SI 0 "register_operand" "=&r")		    ;; old value at mem
> +	(match_operand:SI 1 "memory_operand" "+A"))		    ;; mem location
> +   (set (match_dup 1)
> +	(unspec_volatile:SI
> +	  [(any_atomic:SI (match_dup 1)
> +		     (match_operand:SI 2 "register_operand" "rI"))  ;; value for op
> +	   (match_operand:SI 3 "register_operand" "rI")]	    ;; mask
> +	 UNSPEC_SYNC_OLD_OP_SUBWORD))
> +    (match_operand:SI 4 "register_operand" "rI")		    ;; not_mask
> +    (clobber (match_scratch:SI 5 "=&r"))			    ;; tmp_1
> +    (clobber (match_scratch:SI 6 "=&r"))]			    ;; tmp_2
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +  {
> +    return "1:\;"
> +	   "lr.w.aq\t%0, %1\;"
> +	   "<insn>\t%5, %0, %2\;"
> +	   "and\t%5, %5, %3\;"
> +	   "and\t%6, %0, %4\;"
> +	   "or\t%6, %6, %5\;"
> +	   "sc.w.rl\t%5, %6, %1\;"
> +	   "bnez\t%5, 1b";
> +  }
> +  [(set (attr "length") (const_int 28))])
> +
> +(define_expand "atomic_fetch_nand<mode>"
> +  [(set (match_operand:SHORT 0 "register_operand" "=&r")
> +	(match_operand:SHORT 1 "memory_operand" "+A"))
> +   (set (match_dup 1)
> +	(unspec_volatile:SHORT
> +	  [(not:SHORT (and:SHORT (match_dup 1)
> +				 (match_operand:SHORT 2 "reg_or_0_operand" "rJ")))
> +	   (match_operand:SI 3 "const_int_operand")] ;; model
> +	 UNSPEC_SYNC_OLD_OP_SUBWORD))]
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +{
> +  /* We have no QImode/HImode atomics, so form a mask, then use
> +     subword_atomic_fetch_strong_nand to implement an LR/SC version of the
> +     operation.  */
> +
> +  /* Logic duplicated in libgcc/config/riscv/atomic.c for use when inlining
> +     is disabled.  */
> +
> +  rtx old = gen_reg_rtx (SImode);
> +  rtx mem = operands[1];
> +  rtx value = operands[2];
> +  rtx aligned_mem = gen_reg_rtx (SImode);
> +  rtx shift = gen_reg_rtx (SImode);
> +  rtx mask = gen_reg_rtx (SImode);
> +  rtx not_mask = gen_reg_rtx (SImode);
> +
> +  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
> +
> +  rtx shifted_value = gen_reg_rtx (SImode);
> +  riscv_lshift_subword (<MODE>mode, value, shift, &shifted_value);
> +
> +  emit_insn (gen_subword_atomic_fetch_strong_nand (old, aligned_mem,
> +						   shifted_value,
> +						   mask, not_mask));
> +
> +  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
> +					 gen_lowpart (QImode, shift)));
> +
> +  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
> +
> +  DONE;
> +})
> +
> +(define_insn "subword_atomic_fetch_strong_nand"
> +  [(set (match_operand:SI 0 "register_operand" "=&r")		    ;; old value at mem
> +	(match_operand:SI 1 "memory_operand" "+A"))		    ;; mem location
> +   (set (match_dup 1)
> +	(unspec_volatile:SI
> +	  [(not:SI (and:SI (match_dup 1)
> +			   (match_operand:SI 2 "register_operand" "rI"))) ;; value for op
> +	   (match_operand:SI 3 "register_operand" "rI")]	    ;; mask
> +	 UNSPEC_SYNC_OLD_OP_SUBWORD))
> +    (match_operand:SI 4 "register_operand" "rI")		    ;; not_mask
> +    (clobber (match_scratch:SI 5 "=&r"))			    ;; tmp_1
> +    (clobber (match_scratch:SI 6 "=&r"))]			    ;; tmp_2
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +  {
> +    return "1:\;"
> +	   "lr.w.aq\t%0, %1\;"
> +	   "and\t%5, %0, %2\;"
> +	   "not\t%5, %5\;"
> +	   "and\t%5, %5, %3\;"
> +	   "and\t%6, %0, %4\;"
> +	   "or\t%6, %6, %5\;"
> +	   "sc.w.rl\t%5, %6, %1\;"
> +	   "bnez\t%5, 1b";
> +  }
> +  [(set (attr "length") (const_int 32))])
> +
> +(define_expand "atomic_fetch_<atomic_optab><mode>"
> +  [(set (match_operand:SHORT 0 "register_operand" "=&r")	    ;; old value at mem
> +	(match_operand:SHORT 1 "memory_operand" "+A"))		    ;; mem location
> +   (set (match_dup 1)
> +	(unspec_volatile:SHORT
> +	  [(any_atomic:SHORT (match_dup 1)
> +		     (match_operand:SHORT 2 "reg_or_0_operand" "rJ")) ;; value for op
> +	   (match_operand:SI 3 "const_int_operand")]		    ;; model
> +	 UNSPEC_SYNC_OLD_OP_SUBWORD))]
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +{
> +  /* We have no QImode/HImode atomics, so form a mask, then use
> +     subword_atomic_fetch_strong_<atomic_optab> to implement an LR/SC
> +     version of the operation.  */
> +
> +  /* Logic duplicated in libgcc/config/riscv/atomic.c for use when inlining
> +     is disabled.  */
> +
> +  rtx old = gen_reg_rtx (SImode);
> +  rtx mem = operands[1];
> +  rtx value = operands[2];
> +  rtx aligned_mem = gen_reg_rtx (SImode);
> +  rtx shift = gen_reg_rtx (SImode);
> +  rtx mask = gen_reg_rtx (SImode);
> +  rtx not_mask = gen_reg_rtx (SImode);
> +
> +  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
> +
> +  rtx shifted_value = gen_reg_rtx (SImode);
> +  riscv_lshift_subword (<MODE>mode, value, shift, &shifted_value);
> +
> +  emit_insn (gen_subword_atomic_fetch_strong_<atomic_optab> (old, aligned_mem,
> +							      shifted_value,
> +							      mask, not_mask));
> +
> +  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
> +					 gen_lowpart (QImode, shift)));
> +
> +  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
> +
> +  DONE;
> +})
> +
>  (define_insn "atomic_exchange<mode>"
>    [(set (match_operand:GPR 0 "register_operand" "=&r")
>  	(unspec_volatile:GPR
> @@ -104,6 +244,59 @@
>    [(set_attr "type" "atomic")
>     (set (attr "length") (const_int 8))])
>
> +(define_expand "atomic_exchange<mode>"
> +  [(set (match_operand:SHORT 0 "register_operand" "=&r")
> +	(unspec_volatile:SHORT
> +	  [(match_operand:SHORT 1 "memory_operand" "+A")
> +	   (match_operand:SI 3 "const_int_operand")] ;; model
> +	 UNSPEC_SYNC_EXCHANGE_SUBWORD))
> +   (set (match_dup 1)
> +	(match_operand:SHORT 2 "register_operand" "0"))]
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +{
> +  rtx old = gen_reg_rtx (SImode);
> +  rtx mem = operands[1];
> +  rtx value = operands[2];
> +  rtx aligned_mem = gen_reg_rtx (SImode);
> +  rtx shift = gen_reg_rtx (SImode);
> +  rtx mask = gen_reg_rtx (SImode);
> +  rtx not_mask = gen_reg_rtx (SImode);
> +
> +  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
> +
> +  rtx shifted_value = gen_reg_rtx (SImode);
> +  riscv_lshift_subword (<MODE>mode, value, shift, &shifted_value);
> +
> +  emit_insn (gen_subword_atomic_exchange_strong (old, aligned_mem,
> +						 shifted_value, not_mask));
> +
> +  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
> +					 gen_lowpart (QImode, shift)));
> +
> +  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
> +  DONE;
> +})
> +
> +(define_insn "subword_atomic_exchange_strong"
> +  [(set (match_operand:SI 0 "register_operand" "=&r")	  ;; old value at mem
> +	(match_operand:SI 1 "memory_operand" "+A"))	  ;; mem location
> +   (set (match_dup 1)
> +	(unspec_volatile:SI
> +	  [(match_operand:SI 2 "reg_or_0_operand" "rI")	  ;; value
> +	   (match_operand:SI 3 "reg_or_0_operand" "rI")]  ;; not_mask
> +	 UNSPEC_SYNC_EXCHANGE_SUBWORD))
> +   (clobber (match_scratch:SI 4 "=&r"))]		  ;; tmp_1
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +  {
> +    return "1:\;"
> +	   "lr.w.aq\t%0, %1\;"
> +	   "and\t%4, %0, %3\;"
> +	   "or\t%4, %4, %2\;"
> +	   "sc.w.rl\t%4, %4, %1\;"
> +	   "bnez\t%4, 1b";
> +  }
> +  [(set (attr "length") (const_int 20))])
> +
>  (define_insn "atomic_cas_value_strong<mode>"
>    [(set (match_operand:GPR 0 "register_operand" "=&r")
>  	(match_operand:GPR 1 "memory_operand" "+A"))
> @@ -153,6 +346,127 @@
>    DONE;
>  })
>
> +(define_expand "atomic_compare_and_swap<mode>"
> +  [(match_operand:SI 0 "register_operand" "")    ;; bool output
> +   (match_operand:SHORT 1 "register_operand" "") ;; val output
> +   (match_operand:SHORT 2 "memory_operand" "")   ;; memory
> +   (match_operand:SHORT 3 "reg_or_0_operand" "") ;; expected value
> +   (match_operand:SHORT 4 "reg_or_0_operand" "") ;; desired value
> +   (match_operand:SI 5 "const_int_operand" "")   ;; is_weak
> +   (match_operand:SI 6 "const_int_operand" "")   ;; mod_s
> +   (match_operand:SI 7 "const_int_operand" "")]  ;; mod_f
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +{
> +  emit_insn (gen_atomic_cas_value_strong<mode> (operands[1], operands[2],
> +						operands[3], operands[4],
> +						operands[6], operands[7]));
> +
> +  rtx val = gen_reg_rtx (SImode);
> +  if (operands[1] != const0_rtx)
> +    emit_move_insn (val, gen_rtx_SIGN_EXTEND (SImode, operands[1]));
> +  else
> +    emit_move_insn (val, const0_rtx);
> +
> +  rtx exp = gen_reg_rtx (SImode);
> +  if (operands[3] != const0_rtx)
> +    emit_move_insn (exp, gen_rtx_SIGN_EXTEND (SImode, operands[3]));
> +  else
> +    emit_move_insn (exp, const0_rtx);
> +
> +  rtx compare = val;
> +  if (exp != const0_rtx)
> +    {
> +      rtx difference = gen_rtx_MINUS (SImode, val, exp);
> +      compare = gen_reg_rtx (SImode);
> +      emit_move_insn (compare, difference);
> +    }
> +
> +  if (word_mode != SImode)
> +    {
> +      rtx reg = gen_reg_rtx (word_mode);
> +      emit_move_insn (reg, gen_rtx_SIGN_EXTEND (word_mode, compare));
> +      compare = reg;
> +    }
> +
> +  emit_move_insn (operands[0], gen_rtx_EQ (SImode, compare, const0_rtx));
> +  DONE;
> +})
> +
> +(define_expand "atomic_cas_value_strong<mode>"
> +  [(set (match_operand:SHORT 0 "register_operand" "=&r")		;; val output
> +	(match_operand:SHORT 1 "memory_operand" "+A"))			;; memory
> +   (set (match_dup 1)
> +	(unspec_volatile:SHORT [(match_operand:SHORT 2 "reg_or_0_operand" "rJ") ;; expected val
> +				(match_operand:SHORT 3 "reg_or_0_operand" "rJ") ;; desired val
> +				(match_operand:SI 4 "const_int_operand") ;; mod_s
> +				(match_operand:SI 5 "const_int_operand")] ;; mod_f
> +	 UNSPEC_COMPARE_AND_SWAP_SUBWORD))
> +   (clobber (match_scratch:SHORT 6 "=&r"))]
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +{
> +  /* We have no QImode/HImode atomics, so form a mask, then use
> +     subword_atomic_cas_strong to implement an LR/SC version of the
> +     operation.  */
> +
> +  /* Logic duplicated in libgcc/config/riscv/atomic.c for use when inlining
> +     is disabled.  */
> +
> +  rtx old = gen_reg_rtx (SImode);
> +  rtx mem = operands[1];
> +  rtx aligned_mem = gen_reg_rtx (SImode);
> +  rtx shift = gen_reg_rtx (SImode);
> +  rtx mask = gen_reg_rtx (SImode);
> +  rtx not_mask = gen_reg_rtx (SImode);
> +
> +  riscv_subword_address (mem, &aligned_mem, &shift, &mask, &not_mask);
> +
> +  rtx o = operands[2];
> +  rtx n = operands[3];
> +  rtx shifted_o = gen_reg_rtx (SImode);
> +  rtx shifted_n = gen_reg_rtx (SImode);
> +
> +  riscv_lshift_subword (<MODE>mode, o, shift, &shifted_o);
> +  riscv_lshift_subword (<MODE>mode, n, shift, &shifted_n);
> +
> +  emit_move_insn (shifted_o, gen_rtx_AND (SImode, shifted_o, mask));
> +  emit_move_insn (shifted_n, gen_rtx_AND (SImode, shifted_n, mask));
> +
> +  emit_insn (gen_subword_atomic_cas_strong (old, aligned_mem,
> +					    shifted_o, shifted_n,
> +					    mask, not_mask));
> +
> +  emit_move_insn (old, gen_rtx_ASHIFTRT (SImode, old,
> +					 gen_lowpart (QImode, shift)));
> +
> +  emit_move_insn (operands[0], gen_lowpart (<MODE>mode, old));
> +
> +  DONE;
> +})
> +
> +(define_insn "subword_atomic_cas_strong"
> +  [(set (match_operand:SI 0 "register_operand" "=&r")		     ;; old value at mem
> +	(match_operand:SI 1 "memory_operand" "+A"))		     ;; mem location
> +   (set (match_dup 1)
> +	(unspec_volatile:SI [(match_operand:SI 2 "reg_or_0_operand" "rJ") ;; o
> +			     (match_operand:SI 3 "reg_or_0_operand" "rJ")] ;; n
> +	 UNSPEC_COMPARE_AND_SWAP_SUBWORD))
> +    (match_operand:SI 4 "register_operand" "rI")		     ;; mask
> +    (match_operand:SI 5 "register_operand" "rI")		     ;; not_mask
> +    (clobber (match_scratch:SI 6 "=&r"))]			     ;; tmp_1
> +  "TARGET_ATOMIC && TARGET_INLINE_SUBWORD_ATOMIC"
> +  {
> +    return "1:\;"
> +	   "lr.w.aq\t%0, %1\;"
> +	   "and\t%6, %0, %4\;"
> +	   "bne\t%6, %z2, 1f\;"
> +	   "and\t%6, %0, %5\;"
> +	   "or\t%6, %6, %3\;"
> +	   "sc.w.rl\t%6, %6, %1\;"
> +	   "bnez\t%6, 1b\;"
> +	   "1:";
> +  }
> +  [(set (attr "length") (const_int 28))])
> +
>  (define_expand "atomic_test_and_set"
>    [(match_operand:QI 0 "register_operand" "")     ;; bool output
>     (match_operand:QI 1 "memory_operand" "+A")     ;; memory
> diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
> index a38547f53e5..ba448dcb7ef 100644
> --- a/gcc/doc/invoke.texi
> +++ b/gcc/doc/invoke.texi
> @@ -1226,7 +1226,8 @@ See RS/6000 and PowerPC Options.
>  -mbig-endian -mlittle-endian
>  -mstack-protector-guard=@var{guard} -mstack-protector-guard-reg=@var{reg}
>  -mstack-protector-guard-offset=@var{offset}
> --mcsr-check -mno-csr-check}
> +-mcsr-check -mno-csr-check
> +-minline-atomics -mno-inline-atomics}
>
>  @emph{RL78 Options}
>  @gccoptlist{-msim -mmul=none -mmul=g13 -mmul=g14 -mallregs
> @@ -29006,6 +29007,13 @@ Do or don't use smaller but slower prologue and epilogue code that uses
>  library function calls.  The default is to use fast inline prologues and
>  epilogues.
>
> +@opindex minline-atomics
> +@item -minline-atomics
> +@itemx -mno-inline-atomics
> +Do or don't use smaller but slower subword atomic emulation code that uses
> +libatomic function calls.  The default is to use fast inline subword atomics
> +that do not require libatomic.
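To make the user-visible contract concrete (this is what the
inline-atomics-1.c and -2.c tests below scan for), the same source
either stays an out-of-line call or expands inline.  For example (my
example, not one of the new tests):

  /* With the default -minline-atomics this byte op expands to a
     word-sized lr.w.aq/sc.w.rl loop; with -mno-inline-atomics it
     becomes a call to __sync_fetch_and_add_1, satisfied by the asm
     version kept in libgcc/config/riscv/atomic.c.  */
  char counter;

  char
  bump_counter (void)
  {
    return __sync_fetch_and_add (&counter, 1);
  }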
> + > @opindex mshorten-memrefs > @item -mshorten-memrefs > @itemx -mno-shorten-memrefs > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-1.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-1.c > new file mode 100644 > index 00000000000..5c5623d9b2f > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-1.c > @@ -0,0 +1,18 @@ > +/* { dg-do compile } */ > +/* { dg-options "-mno-inline-atomics" } */ > +/* { dg-message "note: '__sync_fetch_and_nand' changed semantics in GCC 4.4" "fetch_and_nand" { target *-*-* } 0 } */ > +/* { dg-final { scan-assembler "\tcall\t__sync_fetch_and_add_1" } } */ > +/* { dg-final { scan-assembler "\tcall\t__sync_fetch_and_nand_1" } } */ > +/* { dg-final { scan-assembler "\tcall\t__sync_bool_compare_and_swap_1" } } */ > + > +char foo; > +char bar; > +char baz; > + > +int > +main () > +{ > + __sync_fetch_and_add(&foo, 1); > + __sync_fetch_and_nand(&bar, 1); > + __sync_bool_compare_and_swap (&baz, 1, 2); > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-2.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-2.c > new file mode 100644 > index 00000000000..fdce7a5d71f > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-2.c > @@ -0,0 +1,19 @@ > +/* { dg-do compile } */ > +/* Verify that subword atomics do not generate calls. */ > +/* { dg-options "-minline-atomics" } */ > +/* { dg-message "note: '__sync_fetch_and_nand' changed semantics in GCC 4.4" "fetch_and_nand" { target *-*-* } 0 } */ > +/* { dg-final { scan-assembler-not "\tcall\t__sync_fetch_and_add_1" } } */ > +/* { dg-final { scan-assembler-not "\tcall\t__sync_fetch_and_nand_1" } } */ > +/* { dg-final { scan-assembler-not "\tcall\t__sync_bool_compare_and_swap_1" } } */ > + > +char foo; > +char bar; > +char baz; > + > +int > +main () > +{ > + __sync_fetch_and_add(&foo, 1); > + __sync_fetch_and_nand(&bar, 1); > + __sync_bool_compare_and_swap (&baz, 1, 2); > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-3.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-3.c > new file mode 100644 > index 00000000000..709f3734377 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-3.c > @@ -0,0 +1,569 @@ > +/* Check all char alignments. */ > +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-op-1.c */ > +/* Test __atomic routines for existence and proper execution on 1 byte > + values with each valid memory model. */ > +/* { dg-do run } */ > +/* { dg-options "-minline-atomics -Wno-address-of-packed-member" } */ > + > +/* Test the execution of the __atomic_*OP builtin routines for a char. */ > + > +extern void abort(void); > + > +char count, res; > +const char init = ~0; > + > +struct A > +{ > + char a; > + char b; > + char c; > + char d; > +} __attribute__ ((packed)) A; > + > +/* The fetch_op routines return the original value before the operation. 
*/ > + > +void > +test_fetch_add (char* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_fetch_add (v, count, __ATOMIC_RELAXED) != 0) > + abort (); > + > + if (__atomic_fetch_add (v, 1, __ATOMIC_CONSUME) != 1) > + abort (); > + > + if (__atomic_fetch_add (v, count, __ATOMIC_ACQUIRE) != 2) > + abort (); > + > + if (__atomic_fetch_add (v, 1, __ATOMIC_RELEASE) != 3) > + abort (); > + > + if (__atomic_fetch_add (v, count, __ATOMIC_ACQ_REL) != 4) > + abort (); > + > + if (__atomic_fetch_add (v, 1, __ATOMIC_SEQ_CST) != 5) > + abort (); > +} > + > + > +void > +test_fetch_sub (char* v) > +{ > + *v = res = 20; > + count = 0; > + > + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_RELAXED) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, 1, __ATOMIC_CONSUME) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQUIRE) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, 1, __ATOMIC_RELEASE) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQ_REL) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, 1, __ATOMIC_SEQ_CST) != res--) > + abort (); > +} > + > +void > +test_fetch_and (char* v) > +{ > + *v = init; > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_fetch_and (v, init, __ATOMIC_CONSUME) != 0) > + abort (); > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + *v = ~*v; > + if (__atomic_fetch_and (v, init, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST) != 0) > + abort (); > +} > + > +void > +test_fetch_nand (char* v) > +{ > + *v = init; > + > + if (__atomic_fetch_nand (v, 0, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_fetch_nand (v, init, __ATOMIC_CONSUME) != init) > + abort (); > + > + if (__atomic_fetch_nand (v, 0, __ATOMIC_ACQUIRE) != 0 ) > + abort (); > + > + if (__atomic_fetch_nand (v, init, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL) != 0) > + abort (); > + > + if (__atomic_fetch_nand (v, 0, __ATOMIC_SEQ_CST) != init) > + abort (); > +} > + > +void > +test_fetch_xor (char* v) > +{ > + *v = init; > + count = 0; > + > + if (__atomic_fetch_xor (v, count, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME) != init) > + abort (); > + > + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + if (__atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE) != 0) > + abort (); > + > + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_fetch_xor (v, ~count, __ATOMIC_SEQ_CST) != init) > + abort (); > +} > + > +void > +test_fetch_or (char* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_fetch_or (v, count, __ATOMIC_RELAXED) != 0) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, 2, __ATOMIC_CONSUME) != 1) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, count, __ATOMIC_ACQUIRE) != 3) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, 8, __ATOMIC_RELEASE) != 7) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, count, __ATOMIC_ACQ_REL) != 15) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, count, __ATOMIC_SEQ_CST) != 31) > + abort (); > +} > + > +/* The OP_fetch routines return the new value after the operation. 
*/ > + > +void > +test_add_fetch (char* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_add_fetch (v, count, __ATOMIC_RELAXED) != 1) > + abort (); > + > + if (__atomic_add_fetch (v, 1, __ATOMIC_CONSUME) != 2) > + abort (); > + > + if (__atomic_add_fetch (v, count, __ATOMIC_ACQUIRE) != 3) > + abort (); > + > + if (__atomic_add_fetch (v, 1, __ATOMIC_RELEASE) != 4) > + abort (); > + > + if (__atomic_add_fetch (v, count, __ATOMIC_ACQ_REL) != 5) > + abort (); > + > + if (__atomic_add_fetch (v, count, __ATOMIC_SEQ_CST) != 6) > + abort (); > +} > + > + > +void > +test_sub_fetch (char* v) > +{ > + *v = res = 20; > + count = 0; > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, 1, __ATOMIC_CONSUME) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQUIRE) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, 1, __ATOMIC_RELEASE) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_SEQ_CST) != --res) > + abort (); > +} > + > +void > +test_and_fetch (char* v) > +{ > + *v = init; > + > + if (__atomic_and_fetch (v, 0, __ATOMIC_RELAXED) != 0) > + abort (); > + > + *v = init; > + if (__atomic_and_fetch (v, init, __ATOMIC_CONSUME) != init) > + abort (); > + > + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + *v = ~*v; > + if (__atomic_and_fetch (v, init, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL) != 0) > + abort (); > + > + *v = ~*v; > + if (__atomic_and_fetch (v, 0, __ATOMIC_SEQ_CST) != 0) > + abort (); > +} > + > +void > +test_nand_fetch (char* v) > +{ > + *v = init; > + > + if (__atomic_nand_fetch (v, 0, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_nand_fetch (v, init, __ATOMIC_CONSUME) != 0) > + abort (); > + > + if (__atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE) != init) > + abort (); > + > + if (__atomic_nand_fetch (v, init, __ATOMIC_RELEASE) != 0) > + abort (); > + > + if (__atomic_nand_fetch (v, init, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST) != init) > + abort (); > +} > + > + > + > +void > +test_xor_fetch (char* v) > +{ > + *v = init; > + count = 0; > + > + if (__atomic_xor_fetch (v, count, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_xor_fetch (v, ~count, __ATOMIC_CONSUME) != 0) > + abort (); > + > + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + if (__atomic_xor_fetch (v, ~count, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST) != 0) > + abort (); > +} > + > +void > +test_or_fetch (char* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_or_fetch (v, count, __ATOMIC_RELAXED) != 1) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, 2, __ATOMIC_CONSUME) != 3) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, count, __ATOMIC_ACQUIRE) != 7) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, 8, __ATOMIC_RELEASE) != 15) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, count, __ATOMIC_ACQ_REL) != 31) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, count, __ATOMIC_SEQ_CST) != 63) > + abort (); > +} > + > + > +/* Test the OP routines with a result which isn't used. 
Use both variations > + within each function. */ > + > +void > +test_add (char* v) > +{ > + *v = 0; > + count = 1; > + > + __atomic_add_fetch (v, count, __ATOMIC_RELAXED); > + if (*v != 1) > + abort (); > + > + __atomic_fetch_add (v, count, __ATOMIC_CONSUME); > + if (*v != 2) > + abort (); > + > + __atomic_add_fetch (v, 1 , __ATOMIC_ACQUIRE); > + if (*v != 3) > + abort (); > + > + __atomic_fetch_add (v, 1, __ATOMIC_RELEASE); > + if (*v != 4) > + abort (); > + > + __atomic_add_fetch (v, count, __ATOMIC_ACQ_REL); > + if (*v != 5) > + abort (); > + > + __atomic_fetch_add (v, count, __ATOMIC_SEQ_CST); > + if (*v != 6) > + abort (); > +} > + > + > +void > +test_sub (char* v) > +{ > + *v = res = 20; > + count = 0; > + > + __atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED); > + if (*v != --res) > + abort (); > + > + __atomic_fetch_sub (v, count + 1, __ATOMIC_CONSUME); > + if (*v != --res) > + abort (); > + > + __atomic_sub_fetch (v, 1, __ATOMIC_ACQUIRE); > + if (*v != --res) > + abort (); > + > + __atomic_fetch_sub (v, 1, __ATOMIC_RELEASE); > + if (*v != --res) > + abort (); > + > + __atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL); > + if (*v != --res) > + abort (); > + > + __atomic_fetch_sub (v, count + 1, __ATOMIC_SEQ_CST); > + if (*v != --res) > + abort (); > +} > + > +void > +test_and (char* v) > +{ > + *v = init; > + > + __atomic_and_fetch (v, 0, __ATOMIC_RELAXED); > + if (*v != 0) > + abort (); > + > + *v = init; > + __atomic_fetch_and (v, init, __ATOMIC_CONSUME); > + if (*v != init) > + abort (); > + > + __atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE); > + if (*v != 0) > + abort (); > + > + *v = ~*v; > + __atomic_fetch_and (v, init, __ATOMIC_RELEASE); > + if (*v != init) > + abort (); > + > + __atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL); > + if (*v != 0) > + abort (); > + > + *v = ~*v; > + __atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST); > + if (*v != 0) > + abort (); > +} > + > +void > +test_nand (char* v) > +{ > + *v = init; > + > + __atomic_fetch_nand (v, 0, __ATOMIC_RELAXED); > + if (*v != init) > + abort (); > + > + __atomic_fetch_nand (v, init, __ATOMIC_CONSUME); > + if (*v != 0) > + abort (); > + > + __atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE); > + if (*v != init) > + abort (); > + > + __atomic_nand_fetch (v, init, __ATOMIC_RELEASE); > + if (*v != 0) > + abort (); > + > + __atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL); > + if (*v != init) > + abort (); > + > + __atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST); > + if (*v != init) > + abort (); > +} > + > + > + > +void > +test_xor (char* v) > +{ > + *v = init; > + count = 0; > + > + __atomic_xor_fetch (v, count, __ATOMIC_RELAXED); > + if (*v != init) > + abort (); > + > + __atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME); > + if (*v != 0) > + abort (); > + > + __atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE); > + if (*v != 0) > + abort (); > + > + __atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE); > + if (*v != init) > + abort (); > + > + __atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL); > + if (*v != init) > + abort (); > + > + __atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST); > + if (*v != 0) > + abort (); > +} > + > +void > +test_or (char* v) > +{ > + *v = 0; > + count = 1; > + > + __atomic_or_fetch (v, count, __ATOMIC_RELAXED); > + if (*v != 1) > + abort (); > + > + count *= 2; > + __atomic_fetch_or (v, count, __ATOMIC_CONSUME); > + if (*v != 3) > + abort (); > + > + count *= 2; > + __atomic_or_fetch (v, 4, __ATOMIC_ACQUIRE); > + if (*v != 7) > + abort (); > + > + count *= 2; > + __atomic_fetch_or (v, 8, __ATOMIC_RELEASE); > + if (*v != 15) > + 
abort (); > + > + count *= 2; > + __atomic_or_fetch (v, count, __ATOMIC_ACQ_REL); > + if (*v != 31) > + abort (); > + > + count *= 2; > + __atomic_fetch_or (v, count, __ATOMIC_SEQ_CST); > + if (*v != 63) > + abort (); > +} > + > +int > +main () > +{ > + char* V[] = {&A.a, &A.b, &A.c, &A.d}; > + > + for (int i = 0; i < 4; i++) { > + test_fetch_add (V[i]); > + test_fetch_sub (V[i]); > + test_fetch_and (V[i]); > + test_fetch_nand (V[i]); > + test_fetch_xor (V[i]); > + test_fetch_or (V[i]); > + > + test_add_fetch (V[i]); > + test_sub_fetch (V[i]); > + test_and_fetch (V[i]); > + test_nand_fetch (V[i]); > + test_xor_fetch (V[i]); > + test_or_fetch (V[i]); > + > + test_add (V[i]); > + test_sub (V[i]); > + test_and (V[i]); > + test_nand (V[i]); > + test_xor (V[i]); > + test_or (V[i]); > + } > + > + return 0; > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-4.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-4.c > new file mode 100644 > index 00000000000..eecfaae5cc6 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-4.c > @@ -0,0 +1,566 @@ > +/* Check all short alignments. */ > +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-op-2.c */ > +/* Test __atomic routines for existence and proper execution on 2 byte > + values with each valid memory model. */ > +/* { dg-do run } */ > +/* { dg-options "-minline-atomics -Wno-address-of-packed-member" } */ > + > +/* Test the execution of the __atomic_*OP builtin routines for a short. */ > + > +extern void abort(void); > + > +short count, res; > +const short init = ~0; > + > +struct A > +{ > + short a; > + short b; > +} __attribute__ ((packed)) A; > + > +/* The fetch_op routines return the original value before the operation. */ > + > +void > +test_fetch_add (short* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_fetch_add (v, count, __ATOMIC_RELAXED) != 0) > + abort (); > + > + if (__atomic_fetch_add (v, 1, __ATOMIC_CONSUME) != 1) > + abort (); > + > + if (__atomic_fetch_add (v, count, __ATOMIC_ACQUIRE) != 2) > + abort (); > + > + if (__atomic_fetch_add (v, 1, __ATOMIC_RELEASE) != 3) > + abort (); > + > + if (__atomic_fetch_add (v, count, __ATOMIC_ACQ_REL) != 4) > + abort (); > + > + if (__atomic_fetch_add (v, 1, __ATOMIC_SEQ_CST) != 5) > + abort (); > +} > + > + > +void > +test_fetch_sub (short* v) > +{ > + *v = res = 20; > + count = 0; > + > + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_RELAXED) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, 1, __ATOMIC_CONSUME) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQUIRE) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, 1, __ATOMIC_RELEASE) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, count + 1, __ATOMIC_ACQ_REL) != res--) > + abort (); > + > + if (__atomic_fetch_sub (v, 1, __ATOMIC_SEQ_CST) != res--) > + abort (); > +} > + > +void > +test_fetch_and (short* v) > +{ > + *v = init; > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_fetch_and (v, init, __ATOMIC_CONSUME) != 0) > + abort (); > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + *v = ~*v; > + if (__atomic_fetch_and (v, init, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST) != 0) > + abort (); > +} > + > +void > +test_fetch_nand (short* v) > +{ > + *v = init; > + > + if (__atomic_fetch_nand (v, 0, __ATOMIC_RELAXED) != init) 
> + abort (); > + > + if (__atomic_fetch_nand (v, init, __ATOMIC_CONSUME) != init) > + abort (); > + > + if (__atomic_fetch_nand (v, 0, __ATOMIC_ACQUIRE) != 0 ) > + abort (); > + > + if (__atomic_fetch_nand (v, init, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL) != 0) > + abort (); > + > + if (__atomic_fetch_nand (v, 0, __ATOMIC_SEQ_CST) != init) > + abort (); > +} > + > +void > +test_fetch_xor (short* v) > +{ > + *v = init; > + count = 0; > + > + if (__atomic_fetch_xor (v, count, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME) != init) > + abort (); > + > + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + if (__atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE) != 0) > + abort (); > + > + if (__atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_fetch_xor (v, ~count, __ATOMIC_SEQ_CST) != init) > + abort (); > +} > + > +void > +test_fetch_or (short* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_fetch_or (v, count, __ATOMIC_RELAXED) != 0) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, 2, __ATOMIC_CONSUME) != 1) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, count, __ATOMIC_ACQUIRE) != 3) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, 8, __ATOMIC_RELEASE) != 7) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, count, __ATOMIC_ACQ_REL) != 15) > + abort (); > + > + count *= 2; > + if (__atomic_fetch_or (v, count, __ATOMIC_SEQ_CST) != 31) > + abort (); > +} > + > +/* The OP_fetch routines return the new value after the operation. */ > + > +void > +test_add_fetch (short* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_add_fetch (v, count, __ATOMIC_RELAXED) != 1) > + abort (); > + > + if (__atomic_add_fetch (v, 1, __ATOMIC_CONSUME) != 2) > + abort (); > + > + if (__atomic_add_fetch (v, count, __ATOMIC_ACQUIRE) != 3) > + abort (); > + > + if (__atomic_add_fetch (v, 1, __ATOMIC_RELEASE) != 4) > + abort (); > + > + if (__atomic_add_fetch (v, count, __ATOMIC_ACQ_REL) != 5) > + abort (); > + > + if (__atomic_add_fetch (v, count, __ATOMIC_SEQ_CST) != 6) > + abort (); > +} > + > + > +void > +test_sub_fetch (short* v) > +{ > + *v = res = 20; > + count = 0; > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, 1, __ATOMIC_CONSUME) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQUIRE) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, 1, __ATOMIC_RELEASE) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL) != --res) > + abort (); > + > + if (__atomic_sub_fetch (v, count + 1, __ATOMIC_SEQ_CST) != --res) > + abort (); > +} > + > +void > +test_and_fetch (short* v) > +{ > + *v = init; > + > + if (__atomic_and_fetch (v, 0, __ATOMIC_RELAXED) != 0) > + abort (); > + > + *v = init; > + if (__atomic_and_fetch (v, init, __ATOMIC_CONSUME) != init) > + abort (); > + > + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + *v = ~*v; > + if (__atomic_and_fetch (v, init, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL) != 0) > + abort (); > + > + *v = ~*v; > + if (__atomic_and_fetch (v, 0, __ATOMIC_SEQ_CST) != 0) > + abort (); > +} > + > +void > +test_nand_fetch (short* v) > +{ > + *v = init; > + > + if (__atomic_nand_fetch (v, 0, __ATOMIC_RELAXED) != init) > + 
abort (); > + > + if (__atomic_nand_fetch (v, init, __ATOMIC_CONSUME) != 0) > + abort (); > + > + if (__atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE) != init) > + abort (); > + > + if (__atomic_nand_fetch (v, init, __ATOMIC_RELEASE) != 0) > + abort (); > + > + if (__atomic_nand_fetch (v, init, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST) != init) > + abort (); > +} > + > + > + > +void > +test_xor_fetch (short* v) > +{ > + *v = init; > + count = 0; > + > + if (__atomic_xor_fetch (v, count, __ATOMIC_RELAXED) != init) > + abort (); > + > + if (__atomic_xor_fetch (v, ~count, __ATOMIC_CONSUME) != 0) > + abort (); > + > + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE) != 0) > + abort (); > + > + if (__atomic_xor_fetch (v, ~count, __ATOMIC_RELEASE) != init) > + abort (); > + > + if (__atomic_xor_fetch (v, 0, __ATOMIC_ACQ_REL) != init) > + abort (); > + > + if (__atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST) != 0) > + abort (); > +} > + > +void > +test_or_fetch (short* v) > +{ > + *v = 0; > + count = 1; > + > + if (__atomic_or_fetch (v, count, __ATOMIC_RELAXED) != 1) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, 2, __ATOMIC_CONSUME) != 3) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, count, __ATOMIC_ACQUIRE) != 7) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, 8, __ATOMIC_RELEASE) != 15) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, count, __ATOMIC_ACQ_REL) != 31) > + abort (); > + > + count *= 2; > + if (__atomic_or_fetch (v, count, __ATOMIC_SEQ_CST) != 63) > + abort (); > +} > + > + > +/* Test the OP routines with a result which isn't used. Use both variations > + within each function. */ > + > +void > +test_add (short* v) > +{ > + *v = 0; > + count = 1; > + > + __atomic_add_fetch (v, count, __ATOMIC_RELAXED); > + if (*v != 1) > + abort (); > + > + __atomic_fetch_add (v, count, __ATOMIC_CONSUME); > + if (*v != 2) > + abort (); > + > + __atomic_add_fetch (v, 1 , __ATOMIC_ACQUIRE); > + if (*v != 3) > + abort (); > + > + __atomic_fetch_add (v, 1, __ATOMIC_RELEASE); > + if (*v != 4) > + abort (); > + > + __atomic_add_fetch (v, count, __ATOMIC_ACQ_REL); > + if (*v != 5) > + abort (); > + > + __atomic_fetch_add (v, count, __ATOMIC_SEQ_CST); > + if (*v != 6) > + abort (); > +} > + > + > +void > +test_sub (short* v) > +{ > + *v = res = 20; > + count = 0; > + > + __atomic_sub_fetch (v, count + 1, __ATOMIC_RELAXED); > + if (*v != --res) > + abort (); > + > + __atomic_fetch_sub (v, count + 1, __ATOMIC_CONSUME); > + if (*v != --res) > + abort (); > + > + __atomic_sub_fetch (v, 1, __ATOMIC_ACQUIRE); > + if (*v != --res) > + abort (); > + > + __atomic_fetch_sub (v, 1, __ATOMIC_RELEASE); > + if (*v != --res) > + abort (); > + > + __atomic_sub_fetch (v, count + 1, __ATOMIC_ACQ_REL); > + if (*v != --res) > + abort (); > + > + __atomic_fetch_sub (v, count + 1, __ATOMIC_SEQ_CST); > + if (*v != --res) > + abort (); > +} > + > +void > +test_and (short* v) > +{ > + *v = init; > + > + __atomic_and_fetch (v, 0, __ATOMIC_RELAXED); > + if (*v != 0) > + abort (); > + > + *v = init; > + __atomic_fetch_and (v, init, __ATOMIC_CONSUME); > + if (*v != init) > + abort (); > + > + __atomic_and_fetch (v, 0, __ATOMIC_ACQUIRE); > + if (*v != 0) > + abort (); > + > + *v = ~*v; > + __atomic_fetch_and (v, init, __ATOMIC_RELEASE); > + if (*v != init) > + abort (); > + > + __atomic_and_fetch (v, 0, __ATOMIC_ACQ_REL); > + if (*v != 0) > + abort (); > + > + *v = ~*v; > + __atomic_fetch_and (v, 0, __ATOMIC_SEQ_CST); > 
+ if (*v != 0) > + abort (); > +} > + > +void > +test_nand (short* v) > +{ > + *v = init; > + > + __atomic_fetch_nand (v, 0, __ATOMIC_RELAXED); > + if (*v != init) > + abort (); > + > + __atomic_fetch_nand (v, init, __ATOMIC_CONSUME); > + if (*v != 0) > + abort (); > + > + __atomic_nand_fetch (v, 0, __ATOMIC_ACQUIRE); > + if (*v != init) > + abort (); > + > + __atomic_nand_fetch (v, init, __ATOMIC_RELEASE); > + if (*v != 0) > + abort (); > + > + __atomic_fetch_nand (v, init, __ATOMIC_ACQ_REL); > + if (*v != init) > + abort (); > + > + __atomic_nand_fetch (v, 0, __ATOMIC_SEQ_CST); > + if (*v != init) > + abort (); > +} > + > + > + > +void > +test_xor (short* v) > +{ > + *v = init; > + count = 0; > + > + __atomic_xor_fetch (v, count, __ATOMIC_RELAXED); > + if (*v != init) > + abort (); > + > + __atomic_fetch_xor (v, ~count, __ATOMIC_CONSUME); > + if (*v != 0) > + abort (); > + > + __atomic_xor_fetch (v, 0, __ATOMIC_ACQUIRE); > + if (*v != 0) > + abort (); > + > + __atomic_fetch_xor (v, ~count, __ATOMIC_RELEASE); > + if (*v != init) > + abort (); > + > + __atomic_fetch_xor (v, 0, __ATOMIC_ACQ_REL); > + if (*v != init) > + abort (); > + > + __atomic_xor_fetch (v, ~count, __ATOMIC_SEQ_CST); > + if (*v != 0) > + abort (); > +} > + > +void > +test_or (short* v) > +{ > + *v = 0; > + count = 1; > + > + __atomic_or_fetch (v, count, __ATOMIC_RELAXED); > + if (*v != 1) > + abort (); > + > + count *= 2; > + __atomic_fetch_or (v, count, __ATOMIC_CONSUME); > + if (*v != 3) > + abort (); > + > + count *= 2; > + __atomic_or_fetch (v, 4, __ATOMIC_ACQUIRE); > + if (*v != 7) > + abort (); > + > + count *= 2; > + __atomic_fetch_or (v, 8, __ATOMIC_RELEASE); > + if (*v != 15) > + abort (); > + > + count *= 2; > + __atomic_or_fetch (v, count, __ATOMIC_ACQ_REL); > + if (*v != 31) > + abort (); > + > + count *= 2; > + __atomic_fetch_or (v, count, __ATOMIC_SEQ_CST); > + if (*v != 63) > + abort (); > +} > + > +int > +main () { > + short* V[] = {&A.a, &A.b}; > + > + for (int i = 0; i < 2; i++) { > + test_fetch_add (V[i]); > + test_fetch_sub (V[i]); > + test_fetch_and (V[i]); > + test_fetch_nand (V[i]); > + test_fetch_xor (V[i]); > + test_fetch_or (V[i]); > + > + test_add_fetch (V[i]); > + test_sub_fetch (V[i]); > + test_and_fetch (V[i]); > + test_nand_fetch (V[i]); > + test_xor_fetch (V[i]); > + test_or_fetch (V[i]); > + > + test_add (V[i]); > + test_sub (V[i]); > + test_and (V[i]); > + test_nand (V[i]); > + test_xor (V[i]); > + test_or (V[i]); > + } > + > + return 0; > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-5.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-5.c > new file mode 100644 > index 00000000000..52093894a79 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-5.c > @@ -0,0 +1,87 @@ > +/* Test __atomic routines for existence and proper execution on 1 byte > + values with each valid memory model. */ > +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-compare-exchange-1.c */ > +/* { dg-do run } */ > +/* { dg-options "-minline-atomics" } */ > + > +/* Test the execution of the __atomic_compare_exchange_n builtin for a char. 
*/ > + > +extern void abort(void); > + > +char v = 0; > +char expected = 0; > +char max = ~0; > +char desired = ~0; > +char zero = 0; > + > +#define STRONG 0 > +#define WEAK 1 > + > +int > +main () > +{ > + > + if (!__atomic_compare_exchange_n (&v, &expected, max, STRONG , __ATOMIC_RELAXED, __ATOMIC_RELAXED)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) > + abort (); > + if (expected != max) > + abort (); > + > + if (!__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != max) > + abort (); > + if (v != 0) > + abort (); > + > + if (__atomic_compare_exchange_n (&v, &expected, desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (!__atomic_compare_exchange_n (&v, &expected, desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) > + abort (); > + if (expected != 0) > + abort (); > + if (v != max) > + abort (); > + > + /* Now test the generic version. */ > + > + v = 0; > + > + if (!__atomic_compare_exchange (&v, &expected, &max, STRONG, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) > + abort (); > + if (expected != max) > + abort (); > + > + if (!__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != max) > + abort (); > + if (v != 0) > + abort (); > + > + if (__atomic_compare_exchange (&v, &expected, &desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (!__atomic_compare_exchange (&v, &expected, &desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) > + abort (); > + if (expected != 0) > + abort (); > + if (v != max) > + abort (); > + > + return 0; > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-6.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-6.c > new file mode 100644 > index 00000000000..8fee8c44811 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-6.c > @@ -0,0 +1,87 @@ > +/* Test __atomic routines for existence and proper execution on 2 byte > + values with each valid memory model. */ > +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-compare-exchange-2.c */ > +/* { dg-do run } */ > +/* { dg-options "-minline-atomics" } */ > + > +/* Test the execution of the __atomic_compare_exchange_n builtin for a short. 
*/ > + > +extern void abort(void); > + > +short v = 0; > +short expected = 0; > +short max = ~0; > +short desired = ~0; > +short zero = 0; > + > +#define STRONG 0 > +#define WEAK 1 > + > +int > +main () > +{ > + > + if (!__atomic_compare_exchange_n (&v, &expected, max, STRONG , __ATOMIC_RELAXED, __ATOMIC_RELAXED)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) > + abort (); > + if (expected != max) > + abort (); > + > + if (!__atomic_compare_exchange_n (&v, &expected, 0, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != max) > + abort (); > + if (v != 0) > + abort (); > + > + if (__atomic_compare_exchange_n (&v, &expected, desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (!__atomic_compare_exchange_n (&v, &expected, desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) > + abort (); > + if (expected != 0) > + abort (); > + if (v != max) > + abort (); > + > + /* Now test the generic version. */ > + > + v = 0; > + > + if (!__atomic_compare_exchange (&v, &expected, &max, STRONG, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) > + abort (); > + if (expected != max) > + abort (); > + > + if (!__atomic_compare_exchange (&v, &expected, &zero, STRONG , __ATOMIC_RELEASE, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != max) > + abort (); > + if (v != 0) > + abort (); > + > + if (__atomic_compare_exchange (&v, &expected, &desired, WEAK, __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) > + abort (); > + if (expected != 0) > + abort (); > + > + if (!__atomic_compare_exchange (&v, &expected, &desired, STRONG , __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST)) > + abort (); > + if (expected != 0) > + abort (); > + if (v != max) > + abort (); > + > + return 0; > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-7.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-7.c > new file mode 100644 > index 00000000000..24c344c0ce3 > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-7.c > @@ -0,0 +1,69 @@ > +/* Test __atomic routines for existence and proper execution on 1 byte > + values with each valid memory model. */ > +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-exchange-1.c */ > +/* { dg-do run } */ > +/* { dg-options "-minline-atomics" } */ > + > +/* Test the execution of the __atomic_exchange_n builtin for a char. */ > + > +extern void abort(void); > + > +char v, count, ret; > + > +int > +main () > +{ > + v = 0; > + count = 0; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELAXED) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQUIRE) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELEASE) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQ_REL) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_SEQ_CST) != count) > + abort (); > + count++; > + > + /* Now test the generic version. 
*/ > + > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELAXED); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQUIRE); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELEASE); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQ_REL); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_SEQ_CST); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + return 0; > +} > diff --git a/gcc/testsuite/gcc.target/riscv/inline-atomics-8.c b/gcc/testsuite/gcc.target/riscv/inline-atomics-8.c > new file mode 100644 > index 00000000000..edc212df04e > --- /dev/null > +++ b/gcc/testsuite/gcc.target/riscv/inline-atomics-8.c > @@ -0,0 +1,69 @@ > +/* Test __atomic routines for existence and proper execution on 2 byte > + values with each valid memory model. */ > +/* Duplicate logic as libatomic/testsuite/libatomic.c/atomic-exchange-2.c */ > +/* { dg-do run } */ > +/* { dg-options "-minline-atomics" } */ > + > +/* Test the execution of the __atomic_X builtin for a short. */ > + > +extern void abort(void); > + > +short v, count, ret; > + > +int > +main () > +{ > + v = 0; > + count = 0; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELAXED) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQUIRE) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_RELEASE) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_ACQ_REL) != count) > + abort (); > + count++; > + > + if (__atomic_exchange_n (&v, count + 1, __ATOMIC_SEQ_CST) != count) > + abort (); > + count++; > + > + /* Now test the generic version. */ > + > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELAXED); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQUIRE); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_RELEASE); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_ACQ_REL); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + __atomic_exchange (&v, &count, &ret, __ATOMIC_SEQ_CST); > + if (ret != count - 1 || v != count) > + abort (); > + count++; > + > + return 0; > +} > diff --git a/libgcc/config/riscv/atomic.c b/libgcc/config/riscv/atomic.c > index 69f53623509..573d163ea04 100644 > --- a/libgcc/config/riscv/atomic.c > +++ b/libgcc/config/riscv/atomic.c > @@ -30,6 +30,8 @@ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see > #define INVERT "not %[tmp1], %[tmp1]\n\t" > #define DONT_INVERT "" > > +/* Logic duplicated in gcc/gcc/config/riscv/sync.md for use when inlining is enabled */ > + > #define GENERATE_FETCH_AND_OP(type, size, opname, insn, invert, cop) \ > type __sync_fetch_and_ ## opname ## _ ## size (type *p, type v) \ > { \
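
One last note for the archives: the CAS pattern is the only one of
these with a forward branch, because a mismatch on the expected byte
has to bail out before the sc.w rather than retry.  In C, the loop in
subword_atomic_cas_strong is doing roughly this (again a sketch of
mine, not the patch; o and n are the expected and desired bytes
already shifted into position and masked):

  #include <stdint.h>

  static uint32_t
  subword_cas (uint32_t *w, uint32_t o, uint32_t n,
               uint32_t mask, uint32_t not_mask)
  {
    uint32_t old, tmp;
    for (;;)
      {
        old = __atomic_load_n (w, __ATOMIC_ACQUIRE);  /* lr.w.aq */
        if ((old & mask) != o)                        /* bne ..., 1f */
          break;                                      /* expected byte differs */
        tmp = (old & not_mask) | n;                   /* merge desired byte */
        if (__atomic_compare_exchange_n (w, &old, tmp, 0,
                                         __ATOMIC_RELEASE, /* sc.w.rl */
                                         __ATOMIC_RELAXED))
          break;
      }
    /* The expander shifts the returned word right and truncates it to
       recover the old subword value.  */
    return old;
  }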