From: Thomas Schwinge <Thomas_Schwinge@mentor.com>
To: Chung-Lin Tang
CC: <gcc-patches@gcc.gnu.org>
Subject: Re: Too strict synchronization with the local (host) thread?
Date: Fri, 14 Dec 2018 14:50:00 -0000

Hi Chung-Lin!
On Tue, 11 Dec 2018 21:30:31 +0800, Chung-Lin Tang wrote:
> On 2018/12/7 11:56 PM, Thomas Schwinge wrote:
> >> --- a/libgomp/testsuite/libgomp.oacc-c-c++-common/lib-79.c
> >> +++ b/libgomp/testsuite/libgomp.oacc-c-c++-common/lib-79.c
> >> @@ -114,6 +114,7 @@ main (int argc, char **argv)
> >>
> >>    for (i = 0; i < N; i++)
> >>      {
> >> +      stream = (CUstream) acc_get_cuda_stream (i & 1);
> >>        r = cuLaunchKernel (delay, 1, 1, 1, 1, 1, 1, 0, stream, kargs, 0);
>
> > What's the motivation for this change?
>
> To place work on both streams 0 and 1.

That's describing what it does, not the motivation behind it.  ;-)

> > ..., and this change is needed because we're now more strictly
> > synchronizing with the local (host) thread.
> >
> > Regarding the case of "libgomp.oacc-c-c++-common/lib-81.c", as currently
> > present:
> >
> >     [...]
> >     for (i = 0; i < N; i++)
> >       {
> >         r = cuLaunchKernel (delay, 1, 1, 1, 1, 1, 1, 0, streams[i], kargs, 0);
> >         if (r != CUDA_SUCCESS)
> >           {
> >             fprintf (stderr, "cuLaunchKernel failed: %d\n", r);
> >             abort ();
> >           }
> >       }
> >
> > This launches N kernels on N separate async queues/CUDA streams, [0..N).
> >
> >     acc_wait_all_async (N);
> >
> > Then, the "acc_wait_all_async (N)" -- in my understanding! -- should
> > *not* synchronize with the local (host) thread, but instead just set up
> > the additional async queue/CUDA stream N to "depend" on [0..N).
> >
> >     for (i = 0; i <= N; i++)
> >       {
> >         if (acc_async_test (i) != 0)
> >           abort ();
> >       }
> >
> > Thus, all [0..N) should then still have "acc_async_test (i) == 0" (still
> > running).
> >
> >     acc_wait (N);
> >
> > Here, the "acc_wait (N)" would synchronize the local (host) thread with
> > async queue/CUDA stream N and thus recursively with [0..N).
> >
> >     for (i = 0; i <= N; i++)
> >       {
> >         if (acc_async_test (i) != 1)
> >           abort ();
> >       }
> >     [...]
> >
> > So, then all these async queues/CUDA streams here indeed have
> > "acc_async_test (i) == 1", that is, they are idle.
> >
> >
> > Now, the more strict synchronization with the local (host) thread is not
> > wrong in terms of correctness, but I suppose it will impact performance of
> > otherwise asynchronous operations, which now get synchronized too much?
> >
> > Or, of course, I'm misunderstanding something...

> IIRC, we encountered many issues where people misunderstood the meaning
> of "wait+async", using it as if the local host sync happened, where in
> our original implementation it does not.

..., and that's the right thing, in my opinion.  (Do you disagree?)

> Also some areas of the OpenACC spec were vague on whether the local host
> synchronization should or should not happen; basically, the wording
> treated it as if it was only an implementation detail and didn't matter,
> and didn't acknowledge that this would be something visible to the user.

I suppose in correct code that uses a proper mechanism for inter-thread
synchronization, it shouldn't be visible?  (Well, with the additional
synchronization, it would be visible in terms of performance
degradation.)

For example, OpenACC 2.6, 3.2.11. "acc_wait" explicitly states that "If
two or more threads share the same accelerator, the 'acc_wait' routine
will return only if all matching asynchronous operations initiated by
this thread have completed; there is no guarantee that all matching
asynchronous operations initiated by other threads have completed".

I agree that this could be made more explicit throughout the
specification, and also the reading of OpenACC 2.6, 2.16.1. "async
clause" is a bit confusing regarding multiple host threads, but as I
understand it, the idea still is that such wait operations do not
synchronize at the host-thread level.  (Let's please assume that, and
then work with the OpenACC technical committee to get that clarified in
the documentation.)
> At the end, IIRC, I decided that adding a local host synchronization is
> easier for all of us,

Well...

> and took the opportunity of the re-org to make this change.

Well...  Again, a re-org/re-work should not make such functional
changes...

> That said, I didn't notice those tests you listed above were meant to
> test such delicate behavior.
>
> > (For avoidance of doubt, I would accept the "async re-work" as is, but
> > we should eventually clarify this, and restore the behavior we --
> > apparently -- had before, where we didn't synchronize so much?  (So,
> > technically, the "async re-work" would constitute a regression for
> > this kind of usage?)

> It's not hard to restore the old behavior, just a few lines to delete.
> Although, as described above, this change was deliberate.
>
> This might be another issue to raise with the committee.  I think I
> tried on this exact issue a long time ago, but never got answers.

OK, I'll try to find that; or send me a pointer to it, if you still have
it.

I propose to include the following.  Would you please review the "TODO"
comments, and again also especially review the
"libgomp/oacc-parallel.c:goacc_wait" change, and confirm that no
corresponding "libgomp/oacc-parallel.c:GOACC_wait" change needs to be
done, because that code is structured differently.
commit e44cc6dc8f76e50c6f905cd408475589dee7b3b1
Author: Thomas Schwinge
Date:   Thu Dec 13 17:54:35 2018 +0100

    into async re-work: don't synchronize with the local thread unless actually necessary
---
 libgomp/oacc-async.c    | 8 ++++++--
 libgomp/oacc-parallel.c | 1 -
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git libgomp/oacc-async.c libgomp/oacc-async.c
index a38e42781aa0..ec5cbc408d4e 100644
--- libgomp/oacc-async.c
+++ libgomp/oacc-async.c
@@ -195,9 +195,11 @@ acc_wait_async (int async1, int async2)
   if (aq1 == aq2)
     gomp_fatal ("identical parameters");
 
-  thr->dev->openacc.async.synchronize_func (aq1);
   if (aq2)
     thr->dev->openacc.async.serialize_func (aq1, aq2);
+  else
+    //TODO Local thread synchronization.  Necessary for the "async2 == acc_async_sync" case, or can just skip?
+    thr->dev->openacc.async.synchronize_func (aq1);
 }
 
 void
@@ -232,9 +234,11 @@ acc_wait_all_async (int async)
   gomp_mutex_lock (&thr->dev->openacc.async.lock);
   for (goacc_aq_list l = thr->dev->openacc.async.active; l; l = l->next)
     {
-      thr->dev->openacc.async.synchronize_func (l->aq);
       if (waiting_queue)
         thr->dev->openacc.async.serialize_func (l->aq, waiting_queue);
+      else
+        //TODO Local thread synchronization.  Necessary for the "async == acc_async_sync" case, or can just skip?
+        thr->dev->openacc.async.synchronize_func (l->aq);
     }
   gomp_mutex_unlock (&thr->dev->openacc.async.lock);
 }
diff --git libgomp/oacc-parallel.c libgomp/oacc-parallel.c
index 9519abeccc2c..5a441c9efe38 100644
--- libgomp/oacc-parallel.c
+++ libgomp/oacc-parallel.c
@@ -508,7 +508,6 @@ goacc_wait (int async, int num_waits, va_list *ap)
   else
     {
       goacc_aq aq2 = get_goacc_asyncqueue (async);
-      acc_dev->openacc.async.synchronize_func (aq);
       acc_dev->openacc.async.serialize_func (aq, aq2);
     }
 }


Grüße
 Thomas
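For reference, the non-blocking serialize_func that this patch relies on
can be expressed at the CUDA driver level with an event: record the
event at the current tail of the stream to be waited for, and queue a
wait for it on the waiting stream.  This is only a sketch under the
assumption of the CUDA driver API (serialize_streams is a made-up name,
not the actual nvptx plugin function, and error handling is minimal);
it requires CUDA hardware to actually run:

```c
#include <cuda.h>

/* Make all work later submitted to WAITING_STREAM wait for everything
   currently submitted to WAITED_STREAM, without blocking the host.
   Sketch only; the real plugin code may differ.  */
static CUresult
serialize_streams (CUstream waited_stream, CUstream waiting_stream)
{
  CUevent e;
  CUresult r = cuEventCreate (&e, CU_EVENT_DISABLE_TIMING);
  if (r != CUDA_SUCCESS)
    return r;
  /* Mark the current tail of WAITED_STREAM...  */
  r = cuEventRecord (e, waited_stream);
  if (r == CUDA_SUCCESS)
    /* ... and queue a wait for it on WAITING_STREAM.  This call returns
       immediately; the dependency is resolved on the device.  */
    r = cuStreamWaitEvent (waiting_stream, e, 0);
  /* Destroying a recorded-but-pending event is documented to be safe;
     its resources are released asynchronously once it completes.  */
  cuEventDestroy (e);
  return r;
}
```

The key property is that neither cuEventRecord nor cuStreamWaitEvent
blocks the host thread, which is exactly the difference between
serialize_func and synchronize_func that the patch above exposes.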