From: Andrew Stubbs <ams@codesourcery.com>
To: Jakub Jelinek <jakub@redhat.com>,
	"gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH] libgomp, openmp: pinned memory
Date: Wed, 5 Jan 2022 17:07:05 +0000	[thread overview]
Message-ID: <b59981ce-9e47-8b00-03b8-1a9a5d555bb7@codesourcery.com> (raw)
In-Reply-To: <20220104184740.GL2646553@tucnak>

On 04/01/2022 18:47, Jakub Jelinek wrote:
> On Tue, Jan 04, 2022 at 07:28:29PM +0100, Jakub Jelinek via Gcc-patches wrote:
>>>> Other issues in the patch are that it doesn't munlock on deallocation, and
>>>> that because of that we need to figure out what to do about page
>>>> boundaries.  As documented, mlock can be passed an address and/or address +
>>>> size that aren't at page boundaries, and pinning happens even for
>>>> partially touched pages.  But munlock also unpins those partially
>>>> overlapping pages, and at that point we don't know whether other pinned
>>>> allocations still occupy those pages.
>>>
>>> Right, it doesn't munlock because of these issues. I don't know of any way
>>> to solve this that wouldn't involve building tables of locked ranges (and
>>> knowing what the page size is).
>>>
>>> I considered using mmap with the lock flag instead, but the failure mode
>>> looked unhelpful. I guess we could mmap with the regular flags, then mlock
>>> after. That should bypass the regular heap and ensure each allocation has
>>> its own page. I'm not sure what the unintended side-effects of that might
>>> be.
>>
>> But munlock is even more important because of the low ulimit -l: if munlock
>> isn't done on deallocation, the default limit (64KB, I think) will be
>> reached much earlier.  If most users have just a 64KB limit on pinned memory
>> per process, then that most likely calls for grabbing such memory in whole
>> pages and doing memory management on that resource, because wasting that
>> precious memory on partial pages, which will most likely receive non-pinned
>> allocations when we have only 16 such pages in total, is a big waste.
> 
> E.g. if we start using the memkind library (dynamically, via dlopen/dlsym
> etc.) for some of the allocators, for the pinned memory we could use
> e.g. the memkind_create_fixed API - on the first pinned allocation, check
> what the ulimit -l is and, if it is fairly small, mmap PROT_NONE the whole
> pinned size (but don't pin it all at the start, just whatever we need as we
> go).

I don't believe 64KB will be anything like enough for any real HPC 
application. Is it really worth optimizing for this case?
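For reference, a minimal sketch of what that reserve-up-front approach could
look like (purely illustrative; these are not memkind's or libgomp's actual
functions, and handling of RLIM_INFINITY etc. is omitted):

#include <stddef.h>
#include <sys/mman.h>
#include <sys/resource.h>

static void *pin_pool;          /* Reserved address range, not yet committed.  */
static size_t pin_pool_size;

/* On the first pinned allocation, query RLIMIT_MEMLOCK (ulimit -l) and
   reserve that much address space with PROT_NONE, pinning nothing yet.  */
static int
pin_pool_init (void)
{
  struct rlimit rl;
  if (getrlimit (RLIMIT_MEMLOCK, &rl) != 0)
    return -1;
  pin_pool_size = rl.rlim_cur;  /* Often just 64KB by default.  */
  pin_pool = mmap (NULL, pin_pool_size, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return pin_pool == MAP_FAILED ? -1 : 0;
}

/* Commit and pin LEN bytes at OFFSET within the pool only when needed.  */
static int
pin_pool_commit (size_t offset, size_t len)
{
  char *p = (char *) pin_pool + offset;
  if (mprotect (p, len, PROT_READ | PROT_WRITE) != 0)
    return -1;
  return mlock (p, len);
}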

Anyway, I'm working on an implementation using mmap instead of malloc 
for pinned allocations. I figure that will simplify the unpin algorithm 
(because it'll be munmap) and optimize for large allocations such as I 
imagine HPC applications will use. It won't fix the ulimit issue.
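A rough sketch of that mmap-based scheme (hypothetical helper names, not the
actual libgomp implementation): each pinned allocation gets its own mapping,
so freeing it is just munmap, which unpins the pages without touching any
other pinned allocation that might otherwise share a page.

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

static void *
pinned_alloc (size_t size)
{
  /* Round up to whole pages; mmap works at page granularity anyway.  */
  size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);
  size_t len = (size + pagesize - 1) & ~(pagesize - 1);

  void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return NULL;

  /* Pin after mapping; fails cleanly if RLIMIT_MEMLOCK would be exceeded.  */
  if (mlock (p, len) != 0)
    {
      munmap (p, len);
      return NULL;
    }
  return p;
}

static void
pinned_free (void *p, size_t size)
{
  size_t pagesize = (size_t) sysconf (_SC_PAGESIZE);
  size_t len = (size + pagesize - 1) & ~(pagesize - 1);
  /* munmap implicitly unlocks the pages, so no separate munlock is needed,
     and no other allocation can share these pages.  */
  munmap (p, len);
}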

Andrew

Thread overview: 28+ messages
2022-01-04 15:32 Andrew Stubbs
2022-01-04 15:55 ` Jakub Jelinek
2022-01-04 16:58   ` Andrew Stubbs
2022-01-04 18:28     ` Jakub Jelinek
2022-01-04 18:47       ` Jakub Jelinek
2022-01-05 17:07         ` Andrew Stubbs [this message]
2022-01-13 13:53           ` Andrew Stubbs
2022-06-07 11:05             ` Andrew Stubbs
2022-06-07 12:10               ` Jakub Jelinek
2022-06-07 12:28                 ` Andrew Stubbs
2022-06-07 12:40                   ` Jakub Jelinek
2022-06-09  9:38                   ` Thomas Schwinge
2022-06-09 10:09                     ` Tobias Burnus
2022-06-09 10:22                       ` Stubbs, Andrew
2022-06-09 10:31                     ` Stubbs, Andrew
2023-02-16 15:32                     ` Attempt to register OpenMP pinned memory using a device instead of 'mlock' (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-02-16 16:17                       ` Stubbs, Andrew
2023-02-16 22:06                         ` [og12] " Thomas Schwinge
2023-02-17  8:12                           ` Thomas Schwinge
2023-02-20  9:48                             ` Andrew Stubbs
2023-02-20 13:53                               ` [og12] Attempt to not just register but allocate OpenMP pinned memory using a device (was: [og12] Attempt to register OpenMP pinned memory using a device instead of 'mlock') Thomas Schwinge
2023-02-10 15:11             ` [PATCH] libgomp, openmp: pinned memory Thomas Schwinge
2023-02-10 15:55               ` Andrew Stubbs
2023-02-16 21:39             ` [og12] Clarify/verify OpenMP 'omp_calloc' zero-initialization for pinned memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-24 15:49 ` [og12] libgomp: Document OpenMP 'pinned' memory (was: [PATCH] libgomp, openmp: pinned memory Thomas Schwinge
2023-03-27  9:27   ` Stubbs, Andrew
2023-03-27 11:26     ` [og12] libgomp: Document OpenMP 'pinned' memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-27 12:01       ` Andrew Stubbs
