public inbox for gcc-patches@gcc.gnu.org
From: Andrew Stubbs <ams@codesourcery.com>
To: Jakub Jelinek <jakub@redhat.com>
Cc: "gcc-patches@gcc.gnu.org" <gcc-patches@gcc.gnu.org>
Subject: Re: [PATCH] libgomp, openmp: pinned memory
Date: Tue, 4 Jan 2022 16:58:19 +0000	[thread overview]
Message-ID: <48ee767a-0d90-53b4-ea54-9deba9edd805@codesourcery.com> (raw)
In-Reply-To: <20220104155558.GG2646553@tucnak>

On 04/01/2022 15:55, Jakub Jelinek wrote:
> The usual libgomp way of doing this wouldn't be to use #ifdef __linux__, but
> instead add libgomp/config/linux/allocator.c that includes some headers,
> defines some macros and then includes the generic allocator.c.

OK, good point, I can do that.
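
For reference, the config-override pattern would look roughly like the sketch below. This is illustrative only: the macro names are assumptions, since the actual hooks depend on what the generic allocator.c is made to expose.

```c
/* libgomp/config/linux/allocator.c -- sketch only; MEMSPACE_PIN and
   MEMSPACE_UNPIN are hypothetical hook names, not the real libgomp
   interface.  */
#define _GNU_SOURCE
#include <sys/mman.h>

/* Define the Linux-specific pinning operations, then pull in the
   generic allocator, which would use them where the pinned trait is
   requested.  */
#define MEMSPACE_PIN(addr, size)    mlock (addr, size)
#define MEMSPACE_UNPIN(addr, size)  munlock (addr, size)

#include "../../allocator.c"
```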

> I think perror is the wrong thing to do, omp_alloc etc. has a well defined
> interface what to do in such cases - the allocation should just fail (not be
> allocated) and depending on user's choice that can be fatal, or return NULL,
> or chain to some other allocator with other properties etc.

I did it this way because pinning feels more like an optimization, and 
falling back to memory that "just works" seemed like what users would 
want to happen. The perror was added because it turns out the default 
locked-memory ulimit (RLIMIT_MEMLOCK) is tiny and I wanted to hint at 
the solution.

I guess you're right that the consistent behaviour would be to silently 
switch to the fallback allocator, but it still feels like users will be 
left in the dark about why it failed.

> Other issues in the patch are that it doesn't munlock on deallocation and
> that because of that deallocation we need to figure out what to do on page
> boundaries.  As documented, mlock can be passed address and/or address +
> size that aren't at page boundaries and pinning happens even just for
> partially touched pages.  But munlock unpins also even the partially
> overlapping pages and we don't know at that point whether some other pinned
> allocations don't appear in those pages.

Right, it doesn't munlock because of these issues. I don't know of any 
way to solve this that wouldn't involve building tables of locked ranges 
(and knowing what the page size is).

I considered using mmap with the lock flag instead, but the failure mode 
looked unhelpful. I guess we could mmap with the regular flags, then 
mlock after. That should bypass the regular heap and ensure each 
allocation has its own page. I'm not sure what the unintended 
side-effects of that might be.

> Some bad options are only pin pages wholly contained within the allocation
> and don't pin partial pages around it, force at least page alignment and
> size so that everything can be pinned, somehow ensure that we never allocate
> more than one pinned allocation in such partial pages (but can allocate
> there non-pinned allocations), or e.g. use some internal data structure to
> track how many pinned allocations are on the partial pages (say a hash map
> from page start address to a counter how many pinned allocations are there,
> if it goes to 0 munlock even that page, otherwise munlock just the wholly
> contained pages), or perhaps use page size aligned allocation and size and
> just remember in some data structure that the partial pages could be used
> for other pinned (small) allocations.

Bad options indeed. If any part of the memory block is not pinned, I 
expect no performance gains whatsoever, and all this extra bookkeeping 
adds complexity and runtime overhead.

For version 1.0 it feels reasonable to omit the unlock step and hope 
that a) pinned data will be long-lived, or b) short-lived pinned data 
will be replaced with more data that -- most likely -- occupies the same 
pages.

Similarly, it seems likely that serious HPC applications will run on 
devices with plenty of RAM, and if they don't, any page swapping will 
destroy the performance gains of using OpenMP anyway.

For now I'll just fix the architectural issues.

Andrew


Thread overview: 28+ messages
2022-01-04 15:32 Andrew Stubbs
2022-01-04 15:55 ` Jakub Jelinek
2022-01-04 16:58   ` Andrew Stubbs [this message]
2022-01-04 18:28     ` Jakub Jelinek
2022-01-04 18:47       ` Jakub Jelinek
2022-01-05 17:07         ` Andrew Stubbs
2022-01-13 13:53           ` Andrew Stubbs
2022-06-07 11:05             ` Andrew Stubbs
2022-06-07 12:10               ` Jakub Jelinek
2022-06-07 12:28                 ` Andrew Stubbs
2022-06-07 12:40                   ` Jakub Jelinek
2022-06-09  9:38                   ` Thomas Schwinge
2022-06-09 10:09                     ` Tobias Burnus
2022-06-09 10:22                       ` Stubbs, Andrew
2022-06-09 10:31                     ` Stubbs, Andrew
2023-02-16 15:32                     ` Attempt to register OpenMP pinned memory using a device instead of 'mlock' (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-02-16 16:17                       ` Stubbs, Andrew
2023-02-16 22:06                         ` [og12] " Thomas Schwinge
2023-02-17  8:12                           ` Thomas Schwinge
2023-02-20  9:48                             ` Andrew Stubbs
2023-02-20 13:53                               ` [og12] Attempt to not just register but allocate OpenMP pinned memory using a device (was: [og12] Attempt to register OpenMP pinned memory using a device instead of 'mlock') Thomas Schwinge
2023-02-10 15:11             ` [PATCH] libgomp, openmp: pinned memory Thomas Schwinge
2023-02-10 15:55               ` Andrew Stubbs
2023-02-16 21:39             ` [og12] Clarify/verify OpenMP 'omp_calloc' zero-initialization for pinned memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-24 15:49 ` [og12] libgomp: Document OpenMP 'pinned' memory (was: [PATCH] libgomp, openmp: pinned memory Thomas Schwinge
2023-03-27  9:27   ` Stubbs, Andrew
2023-03-27 11:26     ` [og12] libgomp: Document OpenMP 'pinned' memory (was: [PATCH] libgomp, openmp: pinned memory) Thomas Schwinge
2023-03-27 12:01       ` Andrew Stubbs
