public inbox for libc-help@sourceware.org
* Query regarding malloc_trim
@ 2018-08-30 13:55 Sainath Latkar
  2018-08-31 19:01 ` Carlos O'Donell
  0 siblings, 1 reply; 2+ messages in thread
From: Sainath Latkar @ 2018-08-30 13:55 UTC (permalink / raw)
  To: libc-maintainers, libc-help

Hi GLIBC team,
I observed weird behavior in the memory utilization of our application. The
issue narrows down to our usage of *std::queue*: we put small chunks of
data in the queue and empty it completely after some time.
In the life cycle of our application, a particular queue gets populated
with more than a million records and after around 7 to 10 minutes, we empty
the queue completely. At this point, logically the heap memory held by
these objects should be freed but it doesn’t happen. One more observation
is that if we put enough memory usage load on our system this surplus
memory goes into swap.
So the program never frees up the memory; it is only released at the end of
the application's life cycle. This is not a memory leak, because if we try
to acquire more heap memory, the application reuses the surplus memory
instead of allocating more.

As I understand after reading up a bit, the glibc allocator caches this
memory for later use as an optimization. Using *malloc_trim* we can return
this heap memory to the system. I tried *malloc_trim* and it works as
expected.
My question is: is there any other way around this optimization, apart from
using *malloc_trim* and *M_TRIM_THRESHOLD*? Also note that we are not using
C++11, so using *shrink_to_fit* is out of the picture.


Thanks, Sainath Latkar.


* Re: Query regarding malloc_trim
  2018-08-30 13:55 Query regarding malloc_trim Sainath Latkar
@ 2018-08-31 19:01 ` Carlos O'Donell
  0 siblings, 0 replies; 2+ messages in thread
From: Carlos O'Donell @ 2018-08-31 19:01 UTC (permalink / raw)
  To: Sainath Latkar, libc-maintainers, libc-help

On 08/30/2018 09:55 AM, Sainath Latkar wrote:
> I observed weird behavior in the memory utilization of our application. The
> issue narrows down to our usage of *std::queue*: we put small chunks of
> data in the queue and empty it completely after some time.

How small?

> In the life cycle of our application, a particular queue gets populated
> with more than a million records and after around 7 to 10 minutes, we empty
> the queue completely. At this point, logically the heap memory held by
> these objects should be freed but it doesn’t happen. One more observation
> is that if we put enough memory usage load on our system this surplus
> memory goes into swap.

Can you confirm that you are measuring and observing RSS usage (not VSZ)?

> So the program never frees up the memory; it is only released at the end of
> the application's life cycle. This is not a memory leak, because if we try
> to acquire more heap memory, the application reuses the surplus memory
> instead of allocating more.

Correct, this is a performance optimization.

> As I understand after reading up a bit, the glibc allocator caches this
> memory for later use as an optimization. Using *malloc_trim* we can return
> this heap memory to the system. I tried *malloc_trim* and it works as
> expected.

OK.

> My question is: is there any other way around this optimization, apart from
> using *malloc_trim* and *M_TRIM_THRESHOLD*? Also note that we are not using
> C++11, so using *shrink_to_fit* is out of the picture.

I assume you mean "shrink_to_fit"?

The system allocator has many tunable parameters that you must come to
understand and set based on your application's workload. If you want
optimal allocator performance, you need to tune the allocator for the
upcoming workload (or as soon as you know what it will be).

If your objects fit into fastbins, it may be the case that the fastbins are
growing unbounded. You can test this theory by using mallopt to set
M_MXFAST to 0, which disables fastbins (but leaves tcache, with its fixed
limit, enabled).

-- 
Cheers,
Carlos.

