From: Thomas Schwinge <thomas@codesourcery.com>
To: Tom de Vries <tdevries@suse.de>, <gcc-patches@gcc.gnu.org>
Subject: nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp execution (was: [committed][nvptx] Add uniform_warp_check insn)
Date: Thu, 15 Dec 2022 19:27:08 +0100
Message-ID: <87a63ofrpf.fsf@euler.schwinge.homeip.net>
In-Reply-To: <20220201183125.GA4286@delia.home>
[-- Attachment #1: Type: text/plain, Size: 13594 bytes --]
Hi Tom!
First "a bit" of context; skip to "the proposed patch" if you'd like to
see just that.
On 2022-02-01T19:31:27+0100, Tom de Vries via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
> On a GT 1030, with driver version 470.94 and -mptx=3.1 I run into:
> ...
> FAIL: libgomp.oacc-c/../libgomp.oacc-c-c++-common/parallel-dims.c \
> -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none \
> -O2 execution test
> ...
> which minimizes to the same test-case as listed in commit "[nvptx]
> Update default ptx isa to 6.3".
>
> The problem is again that the first diverging branch is not handled as such in
> SASS, which causes problems with a subsequent shfl insn, but given that we
> have -mptx=3.1 we can't use the bar.warp.sync insn.
>
> Given that the default is now -mptx=6.3, and consequently -mptx=3.1 is of a
> lesser importance, implement the next best thing: abort when detecting
> non-convergence using this insn:
> ...
> { .reg.b32 act;
> vote.ballot.b32 act,1;
> .reg.pred uni;
> setp.eq.b32 uni,act,0xffffffff;
> @ !uni trap;
> @ !uni exit;
> }
> ...
>
> Interestingly, the effect of this is that rather than aborting, the test-case
> now passes.
(I suppose this "nudges" the PTX -> SASS compiler in the right
direction?)
For avoidance of doubt, my following discussion is not about the specific
(first) use of 'nvptx_uniform_warp_check' introduced in
commit r12-6971-gf32f74c2e8cef5fe37af6d4e8d7e8f6b4c8ae9a8
"[nvptx] Add uniform_warp_check insn":
> --- a/gcc/config/nvptx/nvptx.cc
> +++ b/gcc/config/nvptx/nvptx.cc
> @@ -4631,15 +4631,29 @@ nvptx_single (unsigned mask, basic_block from, basic_block to)
> if (tail_branch)
> {
> label_insn = emit_label_before (label, before);
> - if (TARGET_PTX_6_0 && mode == GOMP_DIM_VECTOR)
> - warp_sync = emit_insn_after (gen_nvptx_warpsync (), label_insn);
> + if (mode == GOMP_DIM_VECTOR)
> + {
> + if (TARGET_PTX_6_0)
> + warp_sync = emit_insn_after (gen_nvptx_warpsync (),
> + label_insn);
> + else
> + warp_sync = emit_insn_after (gen_nvptx_uniform_warp_check (),
> + label_insn);
> + }
> before = label_insn;
> }
> else
> {
> label_insn = emit_label_after (label, tail);
> - if (TARGET_PTX_6_0 && mode == GOMP_DIM_VECTOR)
> - warp_sync = emit_insn_after (gen_nvptx_warpsync (), label_insn);
> + if (mode == GOMP_DIM_VECTOR)
> + {
> + if (TARGET_PTX_6_0)
> + warp_sync = emit_insn_after (gen_nvptx_warpsync (),
> + label_insn);
> + else
> + warp_sync = emit_insn_after (gen_nvptx_uniform_warp_check (),
> + label_insn);
> + }
> if ((mode == GOMP_DIM_VECTOR || mode == GOMP_DIM_WORKER)
> && CALL_P (tail) && find_reg_note (tail, REG_NORETURN, NULL))
> emit_insn_after (gen_exit (), label_insn);
Later, other uses have been added, for example in OpenMP '-muniform-simt'
code generation.
My following discussion is about the implementation of
'nvptx_uniform_warp_check', originally introduced as follows:
> --- a/gcc/config/nvptx/nvptx.md
> +++ b/gcc/config/nvptx/nvptx.md
> @@ -57,6 +57,7 @@ (define_c_enum "unspecv" [
> UNSPECV_XCHG
> UNSPECV_BARSYNC
> UNSPECV_WARPSYNC
> + UNSPECV_UNIFORM_WARP_CHECK
> UNSPECV_MEMBAR
> UNSPECV_MEMBAR_CTA
> UNSPECV_MEMBAR_GL
> @@ -1985,6 +1986,23 @@ (define_insn "nvptx_warpsync"
> "\\tbar.warp.sync\\t0xffffffff;"
> [(set_attr "predicable" "false")])
>
> +(define_insn "nvptx_uniform_warp_check"
> + [(unspec_volatile [(const_int 0)] UNSPECV_UNIFORM_WARP_CHECK)]
> + ""
> + {
> + output_asm_insn ("{", NULL);
> + output_asm_insn ("\\t" ".reg.b32" "\\t" "act;", NULL);
> + output_asm_insn ("\\t" "vote.ballot.b32" "\\t" "act,1;", NULL);
> + output_asm_insn ("\\t" ".reg.pred" "\\t" "uni;", NULL);
> + output_asm_insn ("\\t" "setp.eq.b32" "\\t" "uni,act,0xffffffff;",
> + NULL);
> + output_asm_insn ("@ !uni\\t" "trap;", NULL);
> + output_asm_insn ("@ !uni\\t" "exit;", NULL);
> + output_asm_insn ("}", NULL);
> + return "";
> + }
> + [(set_attr "predicable" "false")])
Later adjusted, but the fundamental idea is still the same.
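(In CUDA C terms, that fundamental idea amounts to roughly the following --
a conceptual sketch only, using the '__activemask' and '__trap' intrinsics
as stand-ins for the 'vote.ballot.b32'/'trap;'/'exit;' PTX that is actually
emitted:

    /* Conceptual sketch; not the code that GCC emits.  */
    unsigned int act = __activemask ();  /* Which lanes of the warp are executing this?  */
    if (act != 0xffffffffU)              /* Not all 32 lanes converged here...  */
      __trap ();                         /* ..., so abort, like 'trap;' followed by 'exit;'.  */

)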
By temporarily disabling 'nvptx_uniform_warp_check':
(define_insn "nvptx_uniform_warp_check"
[(unspec_volatile [(const_int 0)] UNSPECV_UNIFORM_WARP_CHECK)]
""
{
+#if 0
const char *insns[] = {
"{",
"\\t" ".reg.b32" "\\t" "%%r_act;",
"%.\\t" "vote.ballot.b32" "\\t" "%%r_act,1;",
"\\t" ".reg.pred" "\\t" "%%r_do_abort;",
"\\t" "mov.pred" "\\t" "%%r_do_abort,0;",
"%.\\t" "setp.ne.b32" "\\t" "%%r_do_abort,%%r_act,"
"0xffffffff;",
"@ %%r_do_abort\\t" "trap;",
"@ %%r_do_abort\\t" "exit;",
"}",
NULL
};
for (const char **p = &insns[0]; *p != NULL; p++)
output_asm_insn (*p, NULL);
+#endif
return "";
})
..., I first re-confirmed the problem that it was originally solving.
Testing with:
$ nvidia-smi
[...]
| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
[...]
| 0 Quadro P1000 [...]
For 'check-gcc' with '--target_board=nvptx-none-run/-mptx=3.1 nvptx.exp',
this (obviously) regresses:
PASS: gcc.target/nvptx/uniform-simt-2.c (test for excess errors)
PASS: gcc.target/nvptx/uniform-simt-2.c scan-assembler-times @%r[0-9]*\tatom.global.cas 1
PASS: gcc.target/nvptx/uniform-simt-2.c scan-assembler-times shfl.idx.b32 1
[-PASS:-]{+FAIL:+} gcc.target/nvptx/uniform-simt-2.c scan-assembler-times vote.ballot.b32 1
For 'check-target-libgomp' with
'--target_board=unix/-foffload-options=nvptx-none=-mptx=3.1', there are
no obvious regressions for any OpenMP test cases.
For example, for the test case 'libgomp.c/pr104783-2.c' of
commit a624388b9546b066250be8baa118b7d50c403c25
"[nvptx] Add warp sync at simt exit", 'nvptx_uniform_warp_check' is not
applicable per se: this is about an issue with sm_70+ Independent Thread
Scheduling, which is applicable only 'if (TARGET_PTX_6_0)', and in that
case, we emit 'nvptx_warpsync', not 'nvptx_uniform_warp_check'.
For other OpenMP test cases (which I've not analyzed in detail), we're
maybe simply lucky that 'nvptx_uniform_warp_check' is not relevant
(... at least in this testing configuration). (For avoidance of doubt, I
have no reason to believe that there's any problem with the
PR104783 "[nvptx, openmp] Hang/abort with atomic update in simd construct",
PR104916 "[nvptx] Handle Independent Thread Scheduling for sm_70+ with -muniform-simt",
"[nvptx] Use nvptx_warpsync / nvptx_uniform_warp_check for -muniform-simt",
or other such code changes; mentioning this just for completeness.)
..., but as regards OpenACC test cases, this still regresses several:
[-PASS:-]{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/parallel-dims.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
(That's the one cited in the commit log of
commit r12-6971-gf32f74c2e8cef5fe37af6d4e8d7e8f6b4c8ae9a8
"[nvptx] Add uniform_warp_check insn".)
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-10.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-10.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-4.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-4.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-5.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-5.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-6.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-6.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-7.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-7.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-64-1.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-64-3.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vred2d-128.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
Same for C++.
[-PASS:-]{+FAIL:+} libgomp.oacc-c++/ref-1.C -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-fortran/gemm-2.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+WARNING: program timed out.+}
{+FAIL:+} libgomp.oacc-fortran/gemm.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -g execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -Os execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O1 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -g execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -Os execution test
So that's "good": plenty of evidence that 'nvptx_uniform_warp_check' is
necessary and working.
Now, "the proposed patch". I'd like to make 'nvptx_uniform_warp_check'
fit for non-full-warp execution. For example, to be able to execute such
code in single-threaded 'cuLaunchKernel' for execution of global
constructors/destructors, where those may, for example, call into nvptx
target libraries compiled with '-mgomp' (thus, '-muniform-simt').
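To illustrate what the revised check is supposed to accept, here's a minimal
sketch in plain C (not the actual PTX; 'ntid_x' stands for the '%ntid.x'
special register, and the shift-by-32 case is handled explicitly here,
whereas the PTX in the attached patch computes '(1 << %ntid.x) - 1'
directly and leaves that question as a TODO):

    /* Expected "membermask": the low 'ntid_x' bits set, assuming lane IDs
       are assigned in ascending order; 0xffffffff for (a multiple of) a
       full warp.  */
    static unsigned int
    expected_mask (unsigned int ntid_x)
    {
      if (ntid_x >= 32)
        return 0xffffffffU;
      return (1U << ntid_x) - 1;
    }

The check then traps/exits if the 'vote.ballot.b32' result differs from that
expected mask, instead of comparing against a hard-coded 0xffffffff.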
OK to push (after proper testing, and with TODO markers adjusted/removed)
the attached
"nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp execution"?
Grüße
Thomas
[-- Attachment #2: 0001-nvptx-Make-nvptx_uniform_warp_check-fit-for-non-full.patch --]
[-- Type: text/x-diff, Size: 5808 bytes --]
From 1d8df3b793fc43dd23b2679d4a31b761e6ac799c Mon Sep 17 00:00:00 2001
From: Thomas Schwinge <thomas@codesourcery.com>
Date: Mon, 12 Dec 2022 22:05:37 +0100
Subject: [PATCH] nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp
execution
For example, this allows for '-muniform-simt' code to be executed
single-threaded, which currently fails (device-side 'trap'), as the 0xffffffff
mask isn't correct if not all 32 threads of a warp are active. The same
issue/fix, I suppose but have not verified, would apply if we were to allow for
OpenACC 'vector_length' smaller than 32, for example for OpenACC 'serial'.
We use 'nvptx_uniform_warp_check' only for PTX ISA versions less than 6.0.
Otherwise we use 'nvptx_warpsync', which emits 'bar.warp.sync 0xffffffff',
and that appears to do the right thing.  (I've tested '-muniform-simt'
code executing single-threaded.)
gcc/
* config/nvptx/nvptx.md (nvptx_uniform_warp_check): Make fit for
non-full-warp execution.
gcc/testsuite/
* gcc.target/nvptx/nvptx.exp
(check_effective_target_default_ptx_isa_version_at_least_6_0):
New.
* gcc.target/nvptx/uniform-simt-5.c: New.
libgomp/
* plugin/plugin-nvptx.c (nvptx_exec): Assert what we know about
'blockDimX'.
---
gcc/config/nvptx/nvptx.md | 16 ++++++++++-
gcc/testsuite/gcc.target/nvptx/nvptx.exp | 5 ++++
.../gcc.target/nvptx/uniform-simt-5.c | 28 +++++++++++++++++++
libgomp/plugin/plugin-nvptx.c | 3 ++
4 files changed, 51 insertions(+), 1 deletion(-)
create mode 100644 gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c
diff --git a/gcc/config/nvptx/nvptx.md b/gcc/config/nvptx/nvptx.md
index 8ed685027b5f..8a1bb630a0a7 100644
--- a/gcc/config/nvptx/nvptx.md
+++ b/gcc/config/nvptx/nvptx.md
@@ -2282,10 +2282,24 @@
"{",
"\\t" ".reg.b32" "\\t" "%%r_act;",
"%.\\t" "vote.ballot.b32" "\\t" "%%r_act,1;",
+ /* For '%r_exp', we essentially need 'activemask.b32', but that is "Introduced in PTX ISA version 6.2", and this code here is used only 'if (!TARGET_PTX_6_0)'.  Thus, emulate it.
+ TODO Is that actually correct?  Wouldn't 'activemask.b32' rather replace our 'vote.ballot.b32' given that it registers the *currently active threads*?  */
+ /* Compute the "membermask" of all threads of the warp that are expected to be converged here.
+ For OpenACC, '%ntid.x' is 'vector_length', which per 'nvptx_goacc_validate_dims' always is a multiple of 32.
+ For OpenMP, '%ntid.x' always is 32.
+ Thus, this typically is 0xffffffff, but it also covers the case that not all 32 threads of the warp have been launched.
+ This assumes that lane IDs are assigned in ascending order.  */
+ //TODO Can we rely on '1 << 32 == 0', and '0 - 1 = 0xffffffff'?
+ //TODO https://developer.nvidia.com/blog/using-cuda-warp-level-primitives/
+ //TODO https://stackoverflow.com/questions/54055195/activemask-vs-ballot-sync
+ "\\t" ".reg.b32" "\\t" "%%r_exp;",
+ "%.\\t" "mov.b32" "\\t" "%%r_exp, %%ntid.x;",
+ "%.\\t" "shl.b32" "\\t" "%%r_exp, 1, %%r_exp;",
+ "%.\\t" "sub.u32" "\\t" "%%r_exp, %%r_exp, 1;",
"\\t" ".reg.pred" "\\t" "%%r_do_abort;",
"\\t" "mov.pred" "\\t" "%%r_do_abort,0;",
"%.\\t" "setp.ne.b32" "\\t" "%%r_do_abort,%%r_act,"
- "0xffffffff;",
+ "%%r_exp;",
"@ %%r_do_abort\\t" "trap;",
"@ %%r_do_abort\\t" "exit;",
"}",
diff --git a/gcc/testsuite/gcc.target/nvptx/nvptx.exp b/gcc/testsuite/gcc.target/nvptx/nvptx.exp
index e9622ae7aaa8..17e03daeb7e0 100644
--- a/gcc/testsuite/gcc.target/nvptx/nvptx.exp
+++ b/gcc/testsuite/gcc.target/nvptx/nvptx.exp
@@ -49,6 +49,11 @@ proc check_effective_target_default_ptx_isa_version_at_least { major minor } {
return $res
}
+# Return 1 if code by default compiles for at least PTX ISA version 6.0.
+proc check_effective_target_default_ptx_isa_version_at_least_6_0 { } {
+ return [check_effective_target_default_ptx_isa_version_at_least 6 0]
+}
+
# Return 1 if code with PTX ISA version major.minor or higher can be run.
proc check_effective_target_runtime_ptx_isa_version_at_least { major minor } {
set name runtime_ptx_isa_version_${major}_${minor}
diff --git a/gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c b/gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c
new file mode 100644
index 000000000000..b2f78198db21
--- /dev/null
+++ b/gcc/testsuite/gcc.target/nvptx/uniform-simt-5.c
@@ -0,0 +1,28 @@
+/* Verify that '-muniform-simt' code may be executed single-threaded.
+
+ { dg-do run }
+ { dg-options {-save-temps -O2 -muniform-simt} } */
+
+enum memmodel
+{
+ MEMMODEL_RELAXED = 0
+};
+
+unsigned long long int v64;
+unsigned long long int *p64 = &v64;
+
+int
+main()
+{
+ /* Trigger uniform-SIMT processing. */
+ __atomic_fetch_add (p64, v64, MEMMODEL_RELAXED);
+
+ return 0;
+}
+
+/* Per 'omp_simt_exit':
+ - 'nvptx_warpsync'
+ { dg-final { scan-assembler-times {bar\.warp\.sync\t0xffffffff;} 1 { target default_ptx_isa_version_at_least_6_0 } } }
+ - 'nvptx_uniform_warp_check'
+ { dg-final { scan-assembler-times {vote\.ballot\.b32\t%r_act,1;} 1 { target { ! default_ptx_isa_version_at_least_6_0 } } } }
+*/
diff --git a/libgomp/plugin/plugin-nvptx.c b/libgomp/plugin/plugin-nvptx.c
index 4f4c25a90baf..5f8aed56c8b1 100644
--- a/libgomp/plugin/plugin-nvptx.c
+++ b/libgomp/plugin/plugin-nvptx.c
@@ -984,6 +984,9 @@ nvptx_exec (void (*fn), size_t mapnum, void **hostaddrs, void **devaddrs,
api_info);
}
+ /* Per 'nvptx_goacc_validate_dims'. */
+ assert (dims[GOMP_DIM_VECTOR] % warp_size == 0);
+
kargs[0] = &dp;
CUDA_CALL_ASSERT (cuLaunchKernel, function,
dims[GOMP_DIM_GANG], 1, 1,
--
2.35.1