Hi Tom!

First "a bit" of context; skip to "the proposed patch" if you'd like to see just that.

On 2022-02-01T19:31:27+0100, Tom de Vries via Gcc-patches wrote:
> On a GT 1030, with driver version 470.94 and -mptx=3.1 I run into:
> ...
> FAIL: libgomp.oacc-c/../libgomp.oacc-c-c++-common/parallel-dims.c \
>   -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none \
>   -O2 execution test
> ...
> which minimizes to the same test-case as listed in commit "[nvptx]
> Update default ptx isa to 6.3".
>
> The problem is again that the first diverging branch is not handled as such in
> SASS, which causes problems with a subsequent shfl insn, but given that we
> have -mptx=3.1 we can't use the bar.warp.sync insn.
>
> Given that the default is now -mptx=6.3, and consequently -mptx=3.1 is of a
> lesser importance, implement the next best thing: abort when detecting
> non-convergence using this insn:
> ...
> {
>   .reg.b32 act;
>   vote.ballot.b32 act,1;
>   .reg.pred uni;
>   setp.eq.b32 uni,act,0xffffffff;
>   @ !uni trap;
>   @ !uni exit;
> }
> ...
>
> Interestingly, the effect of this is that rather than aborting, the test-case
> now passes.

(I suppose this "nudges" the PTX -> SASS compiler in the right direction?)

For avoidance of doubt, my following discussion is not about the specific (first) use of 'nvptx_uniform_warp_check' introduced in commit r12-6971-gf32f74c2e8cef5fe37af6d4e8d7e8f6b4c8ae9a8 "[nvptx] Add uniform_warp_check insn":

> --- a/gcc/config/nvptx/nvptx.cc
> +++ b/gcc/config/nvptx/nvptx.cc
> @@ -4631,15 +4631,29 @@ nvptx_single (unsigned mask, basic_block from, basic_block to)
>        if (tail_branch)
>          {
>            label_insn = emit_label_before (label, before);
> -          if (TARGET_PTX_6_0 && mode == GOMP_DIM_VECTOR)
> -            warp_sync = emit_insn_after (gen_nvptx_warpsync (), label_insn);
> +          if (mode == GOMP_DIM_VECTOR)
> +            {
> +              if (TARGET_PTX_6_0)
> +                warp_sync = emit_insn_after (gen_nvptx_warpsync (),
> +                                             label_insn);
> +              else
> +                warp_sync = emit_insn_after (gen_nvptx_uniform_warp_check (),
> +                                             label_insn);
> +            }
>            before = label_insn;
>          }
>        else
>          {
>            label_insn = emit_label_after (label, tail);
> -          if (TARGET_PTX_6_0 && mode == GOMP_DIM_VECTOR)
> -            warp_sync = emit_insn_after (gen_nvptx_warpsync (), label_insn);
> +          if (mode == GOMP_DIM_VECTOR)
> +            {
> +              if (TARGET_PTX_6_0)
> +                warp_sync = emit_insn_after (gen_nvptx_warpsync (),
> +                                             label_insn);
> +              else
> +                warp_sync = emit_insn_after (gen_nvptx_uniform_warp_check (),
> +                                             label_insn);
> +            }
>            if ((mode == GOMP_DIM_VECTOR || mode == GOMP_DIM_WORKER)
>                && CALL_P (tail) && find_reg_note (tail, REG_NORETURN, NULL))
>              emit_insn_after (gen_exit (), label_insn);

Later, other uses have been added, for example in OpenMP '-muniform-simt' code generation.
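(In CUDA terms, the check amounts to roughly the following sketch of mine; the function name is made up, '__activemask'/'__trap' are just the nearest CUDA intrinsics, and GCC of course emits the PTX text directly rather than going through CUDA C++:)

    // Rough CUDA-level analogue of the 'nvptx_uniform_warp_check' PTX
    // sequence quoted above (illustration only).
    __device__ void uniform_warp_check_sketch (void)
    {
      // The emitted 'vote.ballot.b32 act,1;' collects one bit per lane that
      // is currently converged with this one; '__activemask ()' is the
      // closest CUDA-level equivalent.
      unsigned int act = __activemask ();
      // 'setp.eq.b32 uni,act,0xffffffff;': uniform only if all 32 lanes of
      // the warp are accounted for.  Without 'bar.warp.sync' (ptx >= 6.0) we
      // cannot force re-convergence, so merely abort otherwise.
      if (act != 0xffffffffU)
        __trap ();  // The insn emits both '@ !uni trap;' and '@ !uni exit;'.
      // Note: if fewer than 32 threads were launched in the first place
      // (non-full-warp execution, for example single-threaded
      // 'cuLaunchKernel'), 'act' can never equal 0xffffffff, so the check
      // as-is aborts even though nothing has diverged; that's what "the
      // proposed patch" below is about.
    }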
My following discussion is about the implementation of 'nvptx_uniform_warp_check', originally introduced as follows:

> --- a/gcc/config/nvptx/nvptx.md
> +++ b/gcc/config/nvptx/nvptx.md
> @@ -57,6 +57,7 @@ (define_c_enum "unspecv" [
>     UNSPECV_XCHG
>     UNSPECV_BARSYNC
>     UNSPECV_WARPSYNC
> +   UNSPECV_UNIFORM_WARP_CHECK
>     UNSPECV_MEMBAR
>     UNSPECV_MEMBAR_CTA
>     UNSPECV_MEMBAR_GL
> @@ -1985,6 +1986,23 @@ (define_insn "nvptx_warpsync"
>    "\\tbar.warp.sync\\t0xffffffff;"
>    [(set_attr "predicable" "false")])
>
> +(define_insn "nvptx_uniform_warp_check"
> +  [(unspec_volatile [(const_int 0)] UNSPECV_UNIFORM_WARP_CHECK)]
> +  ""
> +  {
> +    output_asm_insn ("{", NULL);
> +    output_asm_insn ("\\t" ".reg.b32" "\\t" "act;", NULL);
> +    output_asm_insn ("\\t" "vote.ballot.b32" "\\t" "act,1;", NULL);
> +    output_asm_insn ("\\t" ".reg.pred" "\\t" "uni;", NULL);
> +    output_asm_insn ("\\t" "setp.eq.b32" "\\t" "uni,act,0xffffffff;",
> +                     NULL);
> +    output_asm_insn ("@ !uni\\t" "trap;", NULL);
> +    output_asm_insn ("@ !uni\\t" "exit;", NULL);
> +    output_asm_insn ("}", NULL);
> +    return "";
> +  }
> +  [(set_attr "predicable" "false")])

Later adjusted, but the fundamental idea is still the same.

Via temporarily disabling 'nvptx_uniform_warp_check':

(define_insn "nvptx_uniform_warp_check"
  [(unspec_volatile [(const_int 0)] UNSPECV_UNIFORM_WARP_CHECK)]
  ""
  {
+#if 0
    const char *insns[] = {
      "{",
      "\\t"   ".reg.b32"        "\\t" "%%r_act;",
      "%.\\t" "vote.ballot.b32" "\\t" "%%r_act,1;",
      "\\t"   ".reg.pred"       "\\t" "%%r_do_abort;",
      "\\t"   "mov.pred"        "\\t" "%%r_do_abort,0;",
      "%.\\t" "setp.ne.b32"     "\\t" "%%r_do_abort,%%r_act,"
                                      "0xffffffff;",
      "@ %%r_do_abort\\t" "trap;",
      "@ %%r_do_abort\\t" "exit;",
      "}",
      NULL
    };
    for (const char **p = &insns[0]; *p != NULL; p++)
      output_asm_insn (*p, NULL);
+#endif
    return "";
  })

..., I've first tested/confirmed the problem that it was originally solving.

Testing with:

$ nvidia-smi
[...]
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7 |
[...]
|   0  Quadro P1000 [...]

For 'check-gcc' with '--target_board=nvptx-none-run/-mptx=3.1 nvptx.exp', this (obviously) regresses:

PASS: gcc.target/nvptx/uniform-simt-2.c (test for excess errors)
PASS: gcc.target/nvptx/uniform-simt-2.c scan-assembler-times @%r[0-9]*\tatom.global.cas 1
PASS: gcc.target/nvptx/uniform-simt-2.c scan-assembler-times shfl.idx.b32 1
[-PASS:-]{+FAIL:+} gcc.target/nvptx/uniform-simt-2.c scan-assembler-times vote.ballot.b32 1

For 'check-target-libgomp' with '--target_board=unix/-foffload-options=nvptx-none=-mptx=3.1', there are no obvious regressions for any OpenMP test cases.

For example, for the test case 'libgomp.c/pr104783-2.c' of commit a624388b9546b066250be8baa118b7d50c403c25 "[nvptx] Add warp sync at simt exit", 'nvptx_uniform_warp_check' is not applicable per se: that one is about an issue with sm_70+ Independent Thread Scheduling, which is addressed only 'if (TARGET_PTX_6_0)', and in that case, we emit 'nvptx_warpsync', not 'nvptx_uniform_warp_check'.

For other OpenMP test cases (which I've not analyzed in detail), we're maybe simply lucky that 'nvptx_uniform_warp_check' is not relevant (... at least in this testing configuration).

(For avoidance of doubt, I have no reason to believe that there's any problem with the PR104783 "[nvptx, openmp] Hang/abort with atomic update in simd construct", PR104916 "[nvptx] Handle Independent Thread Scheduling for sm_70+ with -muniform-simt", "[nvptx] Use nvptx_warpsync / nvptx_uniform_warp_check for -muniform-simt", or other such code changes; mentioning this just for completeness.)
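(Likewise, an illustrative CUDA-level sketch of mine for the contrast with 'nvptx_warpsync' mentioned above: that one is an actual warp barrier, '__syncwarp' in CUDA terms, so under sm_70+ Independent Thread Scheduling it can actively restore convergence, whereas 'nvptx_uniform_warp_check', as sketched earlier, can only verify convergence and abort otherwise:)

    // Rough CUDA-level analogue of 'nvptx_warpsync' (requires ptx >= 6.0);
    // the function name is made up.
    __device__ void warpsync_sketch (void)
    {
      // 'bar.warp.sync 0xffffffff;': wait for all 32 lanes of the warp,
      // actively re-establishing convergence.
      __syncwarp (0xffffffff);
    }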
..., but as regards OpenACC test cases, this still regresses several:

[-PASS:-]{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/parallel-dims.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test

(That's the one cited in the commit log of commit r12-6971-gf32f74c2e8cef5fe37af6d4e8d7e8f6b4c8ae9a8 "[nvptx] Add uniform_warp_check insn".)

[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-10.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-10.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-4.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-4.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-5.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-5.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-6.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-6.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-7.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS: libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-128-7.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 output pattern test-]
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-64-1.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vector-length-64-3.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-c/../libgomp.oacc-c-c++-common/vred2d-128.c -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test

Same for C++.
[-PASS:-]{+FAIL:+} libgomp.oacc-c++/ref-1.C -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test

[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-fortran/gemm-2.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+WARNING: program timed out.+} {+FAIL:+} libgomp.oacc-fortran/gemm.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -g execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/parallel-dims.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -Os execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O0 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O1 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O2 execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -O3 -g execution test
[-PASS:-]{+FAIL:+} libgomp.oacc-fortran/routine-7.f90 -DACC_DEVICE_TYPE_nvidia=1 -DACC_MEM_SHARED=0 -foffload=nvptx-none -Os execution test

So that's "good": plenty of evidence that 'nvptx_uniform_warp_check' is necessary and working.

Now, "the proposed patch".

I'd like to make 'nvptx_uniform_warp_check' fit for non-full-warp execution. For example, to be able to execute such code in single-threaded 'cuLaunchKernel' for execution of global constructors/destructors, where those may, for example, call into nvptx target libraries compiled with '-mgomp' (thus, '-muniform-simt').

OK to push (after proper testing, and with TODO markers adjusted/removed) the attached "nvptx: Make 'nvptx_uniform_warp_check' fit for non-full-warp execution"?


Grüße
 Thomas

-----------------
Siemens Electronic Design Automation GmbH; Anschrift: Arnulfstraße 201, 80634 München; Gesellschaft mit beschränkter Haftung; Geschäftsführer: Thomas Heurung, Frank Thürauf; Sitz der Gesellschaft: München; Registergericht München, HRB 106955