From: Palmer Dabbelt <palmer@dabbelt.com>
To: gcc-patches@gcc.gnu.org
Cc: xc-tan@outlook.com, gcc-patches@gcc.gnu.org,
fantasquex@gmail.com, Andrew Waterman <andrew@sifive.com>
Subject: Re: [PATCH v2] RISC-V: Libitm add RISC-V support.
Date: Thu, 27 Oct 2022 17:44:55 -0700 (PDT)
Message-ID: <mhng-9366d3ea-bf38-491e-8318-5fc74e40fd08@palmer-ri-x1c9a>
In-Reply-To: <5ac3d38a-27c1-cc4a-dc4b-9d82e109503a@gmail.com>
On Thu, 27 Oct 2022 16:05:19 PDT (-0700), gcc-patches@gcc.gnu.org wrote:
>
> On 10/27/22 06:49, Xiongchuan Tan via Gcc-patches wrote:
>> libitm/ChangeLog:
>>
>> * configure.tgt: Add riscv support.
>> * config/riscv/asm.h: New file.
>> * config/riscv/sjlj.S: New file.
>> * config/riscv/target.h: New file.
>> ---
>> v2: Change HW_CACHELINE_SIZE to 64 (in accordance with the RVA profiles, see
>> https://github.com/riscv/riscv-profiles/blob/main/profiles.adoc)
>>
>> libitm/config/riscv/asm.h | 52 +++++++++++++
>> libitm/config/riscv/sjlj.S | 144 +++++++++++++++++++++++++++++++++++
>> libitm/config/riscv/target.h | 50 ++++++++++++
>> libitm/configure.tgt | 2 +
>> 4 files changed, 248 insertions(+)
>> create mode 100644 libitm/config/riscv/asm.h
>> create mode 100644 libitm/config/riscv/sjlj.S
>> create mode 100644 libitm/config/riscv/target.h
>
> Not objecting or even reviewing.... But hasn't transactional memory
> largely fallen out of favor these days? Intel has pulled it and I think
> IBM did as well. Should we be investing in extending libitm at all?
I think we didn't get the memo: https://github.com/riscv/riscv-isa-manual/pull/906
The code looks fine to me, so
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
Acked-by: Palmer Dabbelt <palmer@rivosinc.com>
though I don't have an opinion on whether libitm should be taking ports
to new targets; I'd never even heard of it before.
Some minor comments:
> +_ITM_beginTransaction:
> + cfi_startproc
> + mv a1, sp
> + addi sp, sp, -(14*SZ_GPR+12*SZ_FPR)
> + cfi_adjust_cfa_offset(14*SZ_GPR+12*SZ_FPR)
Many of the ABIs require 16-byte stack alignment.
Also: it doesn't hurt anything to use the extra stack, but we only
strictly need space for the FPRs if we're going to bother saving them.
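
Just as a sketch of what I mean (nothing from the patch; the macro names
are made up): rounding the save area up keeps the sp adjustment 16-byte
aligned no matter what SZ_GPR and SZ_FPR end up being. For example, if
an ILP32 ABI gives SZ_GPR == 4 and SZ_FPR == 8, the raw size is
14*4 + 12*8 = 152, which isn't a multiple of 16.

    /* Hypothetical, not in the patch: round the register save area up
       to a 16-byte multiple so the sp adjustment keeps the stack
       aligned on every ABI, whatever SZ_GPR and SZ_FPR work out to.  */
    #define RAW_FRAME_SIZE  (14*SZ_GPR + 12*SZ_FPR)
    #define FRAME_SIZE      ((RAW_FRAME_SIZE + 15) & ~15)

            addi    sp, sp, -FRAME_SIZE
            cfi_adjust_cfa_offset(FRAME_SIZE)
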
> +/* ??? The size of one line in hardware caches (in bytes). */
> +#define HW_CACHELINE_SIZE 64
Maybe we should have a placeholder libc/vdso routine for the cache line
size? The specs are sort of just a suggestion for that sort of thing.
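
Roughly the shape I'm imagining, purely as a sketch (the sysconf name is
a glibc extension and the helper itself is made up, nothing like it
exists for RISC-V today):

    /* Sketch only: ask libc for the L1 D-cache line size at runtime
       and fall back to the profile-suggested 64 bytes when it can't
       tell us.  _SC_LEVEL1_DCACHE_LINESIZE is a glibc extension and
       may return 0 or -1 when the kernel doesn't report a size.  */
    #include <unistd.h>

    static inline long
    hw_cacheline_size (void)
    {
      long sz = sysconf (_SC_LEVEL1_DCACHE_LINESIZE);
      return sz > 0 ? sz : 64;
    }
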
> +static inline void
> +cpu_relax (void)
> +{
> + __asm__ volatile ("" : : : "memory");
> +}
We have Zihintpause now, but that's a pretty minor optimization.
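
If someone did want it, something like this would be the obvious shape
(sketch only; it assumes GCC's __riscv_zihintpause predefine and an
assembler new enough to accept the "pause" mnemonic):

    /* Sketch: use the Zihintpause spin-wait hint when it's available,
       otherwise fall back to a plain compiler barrier as in the patch.  */
    static inline void
    cpu_relax (void)
    {
    #ifdef __riscv_zihintpause
      __asm__ volatile ("pause" : : : "memory");
    #else
      __asm__ volatile ("" : : : "memory");
    #endif
    }
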