From: Andrew Stubbs <ams@codesourcery.com>
To: Thomas Schwinge <thomas@codesourcery.com>
Cc: Jakub Jelinek <jakub@redhat.com>,
	Tobias Burnus <tobias@codesourcery.com>,
	<gcc-patches@gcc.gnu.org>
Subject: Re: [og12] Attempt to register OpenMP pinned memory using a device instead of 'mlock' (was: [PATCH] libgomp, openmp: pinned memory)
Date: Mon, 20 Feb 2023 09:48:53 +0000	[thread overview]
Message-ID: <10037b90-784c-68c1-4299-ac98624e77ec@codesourcery.com> (raw)
In-Reply-To: <87fsb4vhfs.fsf@euler.schwinge.homeip.net>

On 17/02/2023 08:12, Thomas Schwinge wrote:
> Hi Andrew!
> 
> On 2023-02-16T23:06:44+0100, I wrote:
>> On 2023-02-16T16:17:32+0000, "Stubbs, Andrew via Gcc-patches" <gcc-patches@gcc.gnu.org> wrote:
>>> The mmap implementation was not optimized for a lot of small allocations, and I can't see that issue changing here
>>
>> That's correct, 'mmap' remains.  Under the hood, 'cuMemHostRegister' must
>> surely also be doing some 'mlock'-like thing, so I figured it's best to
>> feed page-boundary memory regions to it, which 'mmap' gets us.
>>
>>> so I don't know if this can be used for mlockall replacement.
>>>
>>> I had assumed that using the Cuda allocator would fix that limitation.
>>
>>  From what I've read (but no first-hand experiments), there's non-trivial
>> overhead with 'cuMemHostRegister' (just like with 'mlock'), so routing
>> all small allocations individually through it probably isn't a good idea
>> either.  Therefore, I suppose, we'll indeed want to use some local
>> allocator if we wish this "optimized for a lot of small allocations".
> 
> Eh, I suppose your point indirectly was that instead of 'mmap' plus
> 'cuMemHostRegister' we ought to use 'cuMemAllocHost'/'cuMemHostAlloc', as
> we assume those already do implement such a local allocator.  Let me
> quickly change that indeed -- we don't currently have a need to use
> 'cuMemHostRegister' instead of 'cuMemAllocHost'/'cuMemHostAlloc'.
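
For reference, a minimal sketch (C, against the CUDA driver API) of what the
'cuMemHostAlloc' route discussed above might look like.  The names
pinned_alloc/pinned_free are placeholders, not anything in libgomp or the
og12 patch; it assumes cuInit has run and a CUDA context is current, and it
leaves out error reporting, the mlock fallback, and any small-allocation
pooling:

#include <cuda.h>
#include <stddef.h>

/* Placeholder allocator: ask the CUDA driver for page-locked host memory
   directly, instead of registering memory we allocated ourselves.  */
static void *
pinned_alloc (size_t size)
{
  void *p = NULL;
  if (cuMemHostAlloc (&p, size, CU_MEMHOSTALLOC_PORTABLE) != CUDA_SUCCESS)
    return NULL;
  return p;
}

static void
pinned_free (void *p)
{
  if (p)
    cuMemFreeHost (p);
}

The point being that the driver's allocator presumably already handles the
page alignment and pooling itself, so nothing needs to be layered on top of
mmap for this path.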


Yes, that's right. I suppose it makes sense to register memory we 
already have, but if we want new memory then trying to reinvent what 
happens inside cuMemAllocHost is pointless.
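
For the "register memory we already have" case, a rough sketch of the mmap
plus cuMemHostRegister combination described earlier in the thread.  Again
the names are placeholders, a current CUDA context is assumed, and error
handling is reduced to returning NULL; the rounding up to a page boundary
follows the reasoning above about feeding whole, page-aligned regions to
cuMemHostRegister:

#include <cuda.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Placeholder: obtain page-aligned, page-granular memory from mmap and pin
   it afterwards with cuMemHostRegister.  */
static void *
pinned_register_alloc (size_t size)
{
  size_t page = (size_t) sysconf (_SC_PAGESIZE);
  size_t rounded = (size + page - 1) & ~(page - 1);

  void *p = mmap (NULL, rounded, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return NULL;

  if (cuMemHostRegister (p, rounded, CU_MEMHOSTREGISTER_PORTABLE)
      != CUDA_SUCCESS)
    {
      munmap (p, rounded);
      return NULL;
    }
  return p;
}

static void
pinned_register_free (void *p, size_t size)
{
  size_t page = (size_t) sysconf (_SC_PAGESIZE);
  size_t rounded = (size + page - 1) & ~(page - 1);

  cuMemHostUnregister (p);
  munmap (p, rounded);
}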

Andrew


Thread overview: 28+ messages
2022-01-04 15:32 [PATCH] libgomp, openmp: pinned memory Andrew Stubbs
2022-01-04 15:55 ` Jakub Jelinek
2022-01-04 16:58   ` Andrew Stubbs
2022-01-04 18:28     ` Jakub Jelinek
2022-01-04 18:47       ` Jakub Jelinek
2022-01-05 17:07         ` Andrew Stubbs
2022-01-13 13:53           ` Andrew Stubbs
2022-06-07 11:05             ` Andrew Stubbs
2022-06-07 12:10               ` Jakub Jelinek
2022-06-07 12:28                 ` Andrew Stubbs
2022-06-07 12:40                   ` Jakub Jelinek
2022-06-09  9:38                   ` Thomas Schwinge
2022-06-09 10:09                     ` Tobias Burnus
2022-06-09 10:22                       ` Stubbs, Andrew
2022-06-09 10:31                     ` Stubbs, Andrew
2023-02-16 15:32                     ` Attempt to register OpenMP pinned memory using a device instead of 'mlock' (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-02-16 16:17                       ` Stubbs, Andrew
2023-02-16 22:06                         ` [og12] " Thomas Schwinge
2023-02-17  8:12                           ` Thomas Schwinge
2023-02-20  9:48                             ` Andrew Stubbs [this message]
2023-02-20 13:53                               ` [og12] Attempt to not just register but allocate OpenMP pinned memory using a device (was: [og12] Attempt to register OpenMP pinned memory using a device instead of 'mlock') Thomas Schwinge
2023-02-10 15:11             ` [PATCH] libgomp, openmp: pinned memory Thomas Schwinge
2023-02-10 15:55               ` Andrew Stubbs
2023-02-16 21:39             ` [og12] Clarify/verify OpenMP 'omp_calloc' zero-initialization for pinned memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-24 15:49 ` [og12] libgomp: Document OpenMP 'pinned' memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-27  9:27   ` Stubbs, Andrew
2023-03-27 11:26     ` [og12] libgomp: Document OpenMP 'pinned' memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-27 12:01       ` Andrew Stubbs
