[ was: Re: [og7] Update nvptx_fork/join barrier placement ]

On 03/19/2018 06:02 PM, Tom de Vries wrote:
> I've got a tentative patch at
> https://gcc.gnu.org/bugzilla/attachment.cgi?id=43707 ( PR84952 -
> "[nvptx] bar.sync generated in divergent code" ).

Tested on x86_64 with nvptx accelerator (in combination with a patch
that verifies the positioning of bar.sync).

Committed to stage4 trunk.

[ Recap:

Consider testcase workers.c:
...
int
main (void)
{
  int a[10];

#pragma acc parallel loop worker
  for (int i = 0; i < 10; i++)
    a[i] = i;

  return 0;
}
...

At -O2, we generate (edited for readability):
...
// BEGIN PREAMBLE
        .version 3.1
        .target sm_30
        .address_size 64
// END PREAMBLE

// BEGIN FUNCTION DECL: main$_omp_fn$0
.entry main$_omp_fn$0 (.param .u64 %in_ar0);
//:FUNC_MAP "main$_omp_fn$0", 0x1, 0x20, 0x20

// BEGIN VAR DEF: __worker_bcast
.shared .align 8 .u8 __worker_bcast[8];

// BEGIN FUNCTION DEF: main$_omp_fn$0
.entry main$_omp_fn$0 (.param .u64 %in_ar0)
{
        .reg .u64 %ar0;
        ld.param.u64 %ar0, [%in_ar0];
        .reg .u32 %r24;
        .reg .u64 %r25;
        .reg .pred %r26;
        .reg .u64 %r27;
        .reg .u64 %r28;
        .reg .u64 %r29;
        .reg .u64 %r30;
        .reg .u64 %r31;
        .reg .u64 %r32;
        .reg .pred %r33;
        .reg .pred %r34;
        {
                .reg .u32 %y;
                mov.u32 %y, %tid.y;
                setp.ne.u32 %r34, %y, 0;
        }
        {
                .reg .u32 %x;
                mov.u32 %x, %tid.x;
                setp.ne.u32 %r33, %x, 0;
        }
        @ %r34 bra.uni $L6;
        @ %r33 bra $L7;
        mov.u64 %r25, %ar0;
        // fork 2;
        cvta.shared.u64 %r32, __worker_bcast;
        st.u64 [%r32], %r25;
$L7:
$L6:
        @ %r33 bra $L5;
        // forked 2;
        bar.sync 0;
        cvta.shared.u64 %r31, __worker_bcast;
        ld.u64 %r25, [%r31];
        mov.u32 %r24, %tid.y;
        setp.le.s32 %r26, %r24, 9;
        @ %r26 bra $L2;
        bra $L3;
$L2:
        ld.u64 %r27, [%r25];
        cvt.s64.s32 %r28, %r24;
        shl.b64 %r29, %r28, 2;
        add.u64 %r30, %r27, %r29;
        st.u32 [%r30], %r24;
$L3:
        bar.sync 1;
        // joining 2;
$L5:
        @ %r34 bra.uni $L8;
        @ %r33 bra $L9;
        // join 2;
$L9:
$L8:
        ret;
}
...

The problem is the positioning of bar.sync, inside the vector-neutering
branch "@ %r33 bra $L5".

The documentation for bar.sync says:
...
Barriers are executed on a per-warp basis as if all the threads in a
warp are active.  Thus, if any thread in a warp executes a bar
instruction, it is as if all the threads in the warp have executed the
bar instruction.  All threads in the warp are stalled until the barrier
completes, and the arrival count for the barrier is incremented by the
warp size (not the number of active threads in the warp).  In
conditionally executed code, a bar instruction should only be used if
it is known that all threads evaluate the condition identically (the
warp does not diverge).
...

The documentation is somewhat contradictory: it first explains that the
instruction is executed on a per-warp basis (implying that a single
thread executing it should be fine), but then goes on to state that it
should not be executed in divergent mode (implying that all threads
should execute it).  Either way, the safest form of usage is: don't
execute in divergent mode.

As is evident from the example above, we do generate bar.sync in
divergent mode, and the patch below fixes that.

With the patch, the difference in positioning of bar.sync in the
example above is:
...
@@ -42,18 +42,18 @@
         st.u64 [%r32], %r25;
 $L7:
 $L6:
+        bar.sync 0;
         @%r33 bra $L5;
         // forked 2;
-        bar.sync 0;
         cvta.shared.u64 %r31, __worker_bcast;
         ld.u64 %r25, [%r31];
         mov.u32 %r24, %tid.y;
         setp.le.s32 %r26, %r24, 9;
         @%r26 bra $L2;
 $L3:
-        bar.sync 1;
         // joining 2;
 $L5:
+        bar.sync 1;
         @%r34 bra.uni $L8;
         @%r33 bra $L9;
         // join 2;
...
]

Thanks,
- Tom