From: Carlos O'Donell <carlos@redhat.com>
To: Christian Hoff <christian_hoff@gmx.net>,
	libc-help <libc-help@sourceware.org>
Subject: Re: Excessive memory consumption when using malloc()
Date: Thu, 25 Nov 2021 13:20:18 -0500	[thread overview]
Message-ID: <5a2c2e65-241a-2b22-bb8d-87c18768145e@redhat.com> (raw)
In-Reply-To: <bb70214a-029a-df1f-983e-87a8d3c05d58@gmx.net>

On 11/25/21 12:20, Christian Hoff via Libc-help wrote:
> Hello all,
>
> we are facing a problem with the memory allocator in glibc 2.17 on
> RHEL 7.9. Our application allocates about 10 GB of memory (split into
> chunks that are each around 512 KB large). This memory is used for some
> computations and released afterwards. After a while, the application is
> running the same computations again, but this time in different threads.
> The first issue we are seeing is that - after the computations are done
> - the 10 GB of memory is not released back to the operating system. Only
> after calling malloc_trim() manually with GDB, the size of the process
> shrinks dramatically from ~10GB to 400 MB. So, at this point, the unused
> memory from the computations is finally returned to the OS.

How many cpus does the system have?

How many threads do you create?

Is this 10GiB of RSS or VSS?

For very large systems glibc malloc will create up to 8 arenas per CPU.

Each arena starts with a default 64MiB VMA reservation.

On a 128 core system this appears as a ~65GiB VSS reservation.
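
If excess arenas turn out to be the issue, you can cap them with mallopt()
before any threads start. A minimal sketch (the cap of 2 is just an example
value; the MALLOC_ARENA_MAX environment variable achieves the same thing
without a rebuild):

  #include <malloc.h>

  int
  main (void)
  {
    /* Limit glibc malloc to at most 2 arenas.  Must run before any
       second thread allocates, since that is when extra arenas get
       created.  */
    mallopt (M_ARENA_MAX, 2);

    /* ... application code ... */
    return 0;
  }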
 
> Our wish would be that the memory is returned to the OS without us
> having to call malloc_trim(). And I understand that glibc also trims the
> heap when there is sufficient free space in top of it (the
> M_TRIM_THRESHOLD in mallopt() controls when this should happen). What
> could be the reason why this is not working in our case? Could it be
> related to heap fragmentation? But assuming that is the reason, why is
> malloc_trim() nevertheless able to free this memory?

The normal trimming strategy is to trim from the top of the heap down.

Chunks at the top of the heap are coalesced, and eventually, when the top chunk is big
enough (past M_TRIM_THRESHOLD), the heap is freed downwards.

This coalescing and freeing is prevented if there are in-use chunks in the heap.

Consider this scenario:
- Make many large allocations that have a short lifetime.
- Make one small allocation that has a very long lifetime.
- Free all the large allocations.

The heap cannot be freed downwards because of the small, long-lifetime allocation.

The call to malloc_trim() walks the heap chunks and frees page-sized chunks or
larger without the requirement that they come from the top of the heap.

In glibc's allocator, mixing lifetimes for allocations will cause heap growth.
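
You can see the effect with a toy program like this (an untested sketch; the
sizes and counts are arbitrary, chosen only to stay below the default mmap
threshold so the allocations are served from the heap; compile with
gcc -std=gnu99):

  #include <malloc.h>
  #include <stdlib.h>

  int
  main (void)
  {
    enum { NLARGE = 1024, LARGE_SZ = 64 * 1024 };
    static void *large[NLARGE];
    int i;

    /* Many large allocations with a short lifetime.  */
    for (i = 0; i < NLARGE; i++)
      large[i] = malloc (LARGE_SZ);

    /* One small allocation with a long lifetime, near the top of
       the heap.  */
    void *pin = malloc (32);

    /* Free all the large allocations.  The free space sits below
       'pin', so the normal top-down trim cannot release it.  */
    for (i = 0; i < NLARGE; i++)
      free (large[i]);

    /* Watch RSS (e.g. /proc/self/status) before and after this
       call: malloc_trim walks the heap and releases whole free
       pages back to the OS even below 'pin'.  */
    malloc_trim (0);

    free (pin);
    return 0;
  }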

I have an important question to ask now:

Do you use aligned allocations?

We have an outstanding defect right now where aligned allocations create small
residual free chunks; when the memory is freed back and allocated again as an
aligned chunk, we are forced to split chunks again, which can lead to ratcheting
effects with certain aligned allocations.

We had a prototype patch for this in Fedora in 2019:
https://lists.fedoraproject.org/archives/list/glibc@lists.fedoraproject.org/thread/2PCHP5UWONIOAEUG34YBAQQYD7JL5JJ4/
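
For reference, the problematic pattern looks roughly like this (a sketch with
arbitrary alignment and size; whether the ratchet actually triggers depends on
the surrounding heap layout):

  #include <stdlib.h>

  static void
  churn_aligned (void)
  {
    int i;
    for (i = 0; i < 1000; i++)
      {
        void *p;
        /* When the free chunk found is not already 4096-aligned,
           the allocator splits off the misaligned head as a small
           free chunk that the next aligned request of this size
           cannot reuse.  */
        if (posix_memalign (&p, 4096, 64 * 1024) == 0)
          free (p);
      }
  }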
 
> And then we also have one other problem. The first run of the
> computations is always fine: we allocate 10 GB of memory and the
> application grows to 10 GB. Afterwards, we release those 10 GB of memory
> since the computations are now done and at this point the freed memory
> is returned back to the allocator (however, the size of the process
> remains 10 GB unless we call malloc_trim()). But if we now re-run the
> same computations again a second time (this time using different
> threads), a problem occurs. In this case, the size of the application
> grows well beyond 10 GB. It can get 20 GB or larger and the process is
> eventually killed because the system runs out of memory.

You need to determine what is going on under the hood here.

You may want to just use malloc_info() to get a routine dump of the heap state.

This will give us a starting point to see what is growing.
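
Something like this is usually enough to instrument the application
(dump_heap_state is just a hypothetical helper name; malloc_info's first
argument must currently be 0, and the output is XML written to the given
stream):

  #include <malloc.h>
  #include <stdio.h>

  /* Dump the state of all arenas (sizes, free chunk counts, totals)
     to stderr, labelled so dumps can be compared later.  */
  static void
  dump_heap_state (const char *tag)
  {
    fprintf (stderr, "=== malloc_info: %s ===\n", tag);
    malloc_info (0, stderr);
  }

Call it before the first run of the computations, after the 10 GB is freed,
and again during the second run, then compare the per-arena totals across
the dumps.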

We have a malloc allocation tracer that you can use to capture a workload and
share a snapshot of the workload with upstream:
https://pagure.io/glibc-malloc-trace-utils

Sharing the workload might be hard, though, because a full API trace quickly
gets large.

> Do you have any idea why this happens? To me it seems like the threads
> are assigned to different arenas and therefore the previously freed 10
> GB of memory cannot be re-used because it sits in different arenas. Is that
> possible?

I don't know why this happens.

Threads, once bound to an arena, normally never move unless an allocation fails.
 
> A workaround I have found is to set M_MMAP_THRESHOLD to 128 KB - then
> the memory for the computations is always allocated using mmap() and
> returned back to the system immediately when it is free()'ed. This
> solves both of the issues. But I am afraid that this workaround could
> degrade the performance of our application. So, we are grateful for any
> better solution to this problem.

It will degrade performance because every such allocation and free costs a syscall.
You can try raising the value.
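
For example (the 256KiB value is only illustrative; note that setting
M_MMAP_THRESHOLD explicitly also disables the dynamic threshold adjustment
glibc does by default):

  #include <malloc.h>

  int
  main (void)
  {
    /* Early in main, before the computation threads start.  A value
       between the 128KiB workaround and the ~512KB computation chunk
       size keeps those chunks on mmap (returned to the OS on free)
       while smaller requests stay on the heap and skip the
       per-allocation syscall.  */
    mallopt (M_MMAP_THRESHOLD, 256 * 1024);

    /* ... start computation threads ... */
    return 0;
  }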

-- 
Cheers,
Carlos.

