From: Jakub Jelinek <jakub@redhat.com>
To: Richard Biener <richard.guenther@gmail.com>
Cc: "Michael V. Zolotukhin" <michael.v.zolotukhin@gmail.com>,
	       Kirill Yukhin <kirill.yukhin@gmail.com>,
	       Richard Henderson <rth@redhat.com>,
	GCC Development <gcc@gcc.gnu.org>,
	       Torvald Riegel <triegel@redhat.com>
Subject: Re: [RFC] Offloading Support in libgomp
Date: Wed, 28 Aug 2013 17:15:00 -0000
Message-ID: <20130828113759.GU21876@tucnak.zalov.cz>
In-Reply-To: <CAFiYyc3kjVjO9SqxUbkAAChfKMR8SO=8Mwe5TnOWr-r_Qdv0RQ@mail.gmail.com>

On Wed, Aug 28, 2013 at 01:21:53PM +0200, Richard Biener wrote:
> My thought was that we need to have control over scheduling and thus have
> a single runtime to be able to execute the following in parallel on the
> accelerator and the CPU:
> 
> #pragma omp parallel
> {
> #pragma omp target
>    for (;;)
>      ...
> #pragma omp for
>   for (;;)
>      ...
> }
> #pragma omp wait
> 
> that is, the omp target dispatch may not block the CPU.  I can hardly

An OpenMP #pragma omp target region blocks the encountering host thread
until the accelerator code finishes.  So if the goal is to run some
accelerator code in parallel with parallelized host code, you'd need to
make the code more complicated.  I guess you could do something like:
#pragma omp parallel
{
  #pragma omp single nowait
  #pragma omp target
  {
    #pragma omp parallel
    ...
  }
  #pragma omp for schedule(dynamic, N)
  for (;;)
    ...
}
or similar; then only one of the host parallel threads would spawn the
target code and wait for it to be done, while the other threads in the
meantime would do the worksharing (and the dynamic schedule would make sure
that if the target region took a long time, no work or almost no work would
be scheduled for the thread executing the target region).  Note the nowait
on the single: without it, the implied barrier at the end of the single
construct would make all the other threads wait for the target region to
finish before they could reach the worksharing loop.
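
Spelled out as a self-contained translation unit, that could look something
like the following (work_on_device, work_on_host, overlap and the chunk
size 8 are just made-up placeholders for illustration):

#pragma omp declare target
static void
work_on_device (void)
{
  /* Placeholder for the device-side work.  */
}
#pragma omp end declare target

static void
work_on_host (int i)
{
  /* Placeholder for the per-iteration host-side work.  */
  (void) i;
}

void
overlap (int n)
{
  #pragma omp parallel
  {
    /* nowait: the threads that don't execute the single block fall
       straight through to the worksharing loop below.  */
    #pragma omp single nowait
    #pragma omp target
    #pragma omp parallel
    work_on_device ();

    /* dynamic schedule: the thread that was blocked in the target
       region picks up few or no iterations once it reaches the loop.  */
    #pragma omp for schedule(dynamic, 8)
    for (int i = 0; i < n; i++)
      work_on_host (i);
  }
}

Note the declare target around work_on_device: anything called from inside
a target region has to be compiled for the accelerator as well, which is
what the declare target discussion below is about.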

> > In the Intel MIC case (the only thing I've looked briefly at for how the
> > offloading works - the COI library) you can load binaries and shared
> > libraries either from files or from host memory image, so e.g. you can
> > embed the libgomp library, some kind of libm and some kind of libc
> > (would that be glibc, newlib, something else?) compiled for the target
> > into some data section inside of the plugin or something
> > (or load it from files of course).  No idea how you do this in the
> > HSAIL case, or PTX.
> 
> For HSA you can do arbitrary calls to CPU code (that will then of course
> execute on the CPU).

GCC compiles into HSAIL assembly or bytecode, right, and that is then
further processed by some (right now proprietary?) blob.  The question is
whether this allows linking of multiple HSAIL bytecode objects/libraries,
etc.  Say you have something providing (a subset of) a C library, a math
library and libgomp; then, for OpenMP, one host shared library provides some
#pragma omp declare target
...
#pragma omp end declare target
routine, and another shared library uses #pragma omp target and calls that
routine from there.  So I'd assume you have some HSAIL assembly/bytecode in
each of the shared libraries; can you link that together and tell the
runtime to execute some (named?) routine in there?
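
To make that concrete, here is a minimal sketch of the two-shared-library
scenario (libprovider/libuser and the function names are made up for
illustration):

/* In libprovider.so: */
#pragma omp declare target
int
provider_fn (int x)
{
  return 2 * x;
}
#pragma omp end declare target

/* In libuser.so: */
#pragma omp declare target
extern int provider_fn (int);
#pragma omp end declare target

int
user_fn (int x)
{
  int r;
  #pragma omp target map(from: r)
  r = provider_fn (x);
  return r;
}

Each shared library would then carry its own HSAIL (or PTX, or MIC) image
of the code between the declare target markers, and the question above is
whether the device-side reference to provider_fn in libuser.so's image can
be resolved against the image in libprovider.so at run time.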

	Jakub
