public inbox for libc-help@sourceware.org
* Abnormal memory usage with glibc 2.31 related to thread cache and trimming strategy ?
@ 2020-09-16 12:14 Xavier Roche
  2020-09-18 16:44 ` Konstantin Kharlamov
  0 siblings, 1 reply; 12+ messages in thread
From: Xavier Roche @ 2020-09-16 12:14 UTC (permalink / raw)
  To: libc-help

Dear glibc enthusiasts,

We at Algolia have been experiencing really strong memory usage
regressions with glibc 2.31 (we are currently using an older glibc
2.23) in some use cases involving high workloads on medium-size systems
(128GB of RAM, 12 cores).

By strong regression, we mean a factor of 2 to 10 in memory usage
compared to the 2.23 release.

Looking at NEWS
(https://sourceware.org/git/?p=glibc.git;a=blob_plain;f=NEWS;hb=HEAD)
the only plausible major change seems to be the per-thread cache
introduced in 2.26, but this is a pure guess not backed by any evidence.

Investigations show that calling malloc_trim(0) "solves" the memory
consumption issue, which tends to hint at a trimming strategy issue in
existing heap pools.

In the example below, we could reduce resident size from ~120GB to
~9GB by calling malloc_trim(). We do not use any specific mallopt
settings or any GLIBC_TUNABLES environment tuning.

We have nearly 100 heaps, and some of them hold a really large amount
of free blocks (15GB):

Interesting parts extracted from malloc_info():

<heap nr="87">
<sizes>
... ( skipped not so interesting part )
  <size from="542081" to="67108801" total="15462549676" count="444"/>
  <unsorted from="113" to="113" total="113" count="1"/>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="901" size="15518065028"/>
<system type="current" size="15828295680"/>
<system type="max" size="16474275840"/>
<aspace type="total" size="15828295680"/>
<aspace type="mprotect" size="15828295680"/>
<aspace type="subheaps" size="241"/>
</heap>

The global stats seem to indicate ~137GB of free memory not reclaimed
by the system (assuming "rest" really means free blocks, which is only
a guess on my part):

<total type="fast" count="551" size="35024"/>
<total type="rest" count="511290" size="137157559274"/>
<total type="mmap" count="12" size="963153920"/>
<system type="current" size="139098812416"/>
<system type="max" size="197709660160"/>
<aspace type="total" size="139098812416"/>
<aspace type="mprotect" size="140098441216"/>

We tried playing with glibc.malloc.trim_threshold (with values as low
as 1048576) and glibc.malloc.mmap_threshold, but neither really helped.

Is this behavior expected?

I'm ready to apply any suggested tuning or to extract any relevant data
if needed (notably, the full malloc_info() XML dump)!
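
For reference, the stats above were extracted with something along these
lines (a simplified sketch, not our exact instrumentation;
dump_malloc_stats is just an illustrative name):

#include <malloc.h>
#include <stdio.h>

/* Dump the allocator state as XML (the <malloc>...</malloc> document
   quoted above) to the given path. */
static void dump_malloc_stats(const char *path)
{
    FILE *f = fopen(path, "w");
    if (f != NULL) {
        malloc_info(0, f);
        fclose(f);
    }
}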

Thanks for any hint,

-- 
Xavier Roche -
xavier.roche@algolia.com


* Re: Abnormal memory usage with glibc 2.31 related to thread cache and trimming strategy ?
  2020-09-16 12:14 Abnormal memory usage with glibc 2.31 related to thread cache and trimming strategy ? Xavier Roche
@ 2020-09-18 16:44 ` Konstantin Kharlamov
  2020-09-19 14:00   ` Abnormal memory usage with glibc 2.31 related to " Xavier Roche
  0 siblings, 1 reply; 12+ messages in thread
From: Konstantin Kharlamov @ 2020-09-18 16:44 UTC (permalink / raw)
  To: Xavier Roche, libc-help

On Wed, 2020-09-16 at 14:14 +0200, Xavier Roche via Libc-help wrote:
> Dear glibc enthusiasts,
>
> We at Algolia have been experiencing really strong memory usage
> regressions with glibc 2.31 (we are currently using an older glibc
> 2.23) in some use cases involving high workloads on medium-size systems
> (128GB of RAM, 12 cores).
>
> By strong regression, we mean a factor of 2 to 10 in memory usage
> compared to the 2.23 release.
>
> Looking at NEWS
> (https://sourceware.org/git/?p=glibc.git;a=blob_plain;f=NEWS;hb=HEAD)
> the only plausible major change seems to be the per-thread cache
> introduced in 2.26, but this is a pure guess not backed by any evidence.
> …

The per-thread cache is called `tcache`. Looking at the glibc tunables¹, I
see there are three tunables related to tcache. One of them,
`glibc.malloc.tcache_count`, is documented as: "If set to zero, the
per-thread cache is effectively disabled."

So what you can do is play with this tunable to see whether it affects the
behavior you're observing.

1: 
https://www.gnu.org/software/libc/manual/html_node/Memory-Allocation-Tunables.html




* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-18 16:44 ` Konstantin Kharlamov
@ 2020-09-19 14:00   ` Xavier Roche
  2020-09-20 20:45     ` Konstantin Kharlamov
  0 siblings, 1 reply; 12+ messages in thread
From: Xavier Roche @ 2020-09-19 14:00 UTC (permalink / raw)
  To: libc-help

Hi Konstantin,

On Fri, Sep 18, 2020 at 6:45 PM Konstantin Kharlamov <hi-angel@yandex.ru> wrote:
> `glibc.malloc.tcache_count`: "If set to zero, the per-thread cache is
> effectively disabled."
> So what you can do is play with this tunable to see whether it affects the
> behavior you're observing.

Thanks for pointing me to this setting, which I had indeed missed.

Unfortunately it did not improve the situation
(with GLIBC_TUNABLES=glibc.malloc.tcache_count=0): the process started
to slowly grow as usual, now retaining more than 50GB of released memory.

Looking at the malloc.c code, it seems that the thread cache cannot be
the culprit: blocks are rather small (governed by the get_max_fast()
limit).

So this is probably related to another issue, possibly the number of heaps?

The only workaround for now seems to be calling malloc_trim(0) on a
regular basis (i.e. with a period of 30 seconds), but this is rather
unfortunate.
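
For the record, the workaround is roughly the following (a minimal sketch
assuming a dedicated trimming thread; start_trim_thread is just an
illustrative name):

#include <malloc.h>
#include <pthread.h>
#include <unistd.h>

/* Periodically hand free, coalesced pages back to the kernel. */
static void *trim_loop(void *arg)
{
    (void) arg;
    for (;;) {
        sleep(30);       /* same 30-second period as mentioned above */
        malloc_trim(0);  /* pad of 0: release as much as possible */
    }
    return NULL;
}

static void start_trim_thread(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, trim_loop, NULL) == 0)
        pthread_detach(tid);
}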

Regards,

-- 
Xavier Roche -


* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-19 14:00   ` Abnormal memory usage with glibc 2.31 related to " Xavier Roche
@ 2020-09-20 20:45     ` Konstantin Kharlamov
  2020-09-21  8:39       ` Xavier Roche
  0 siblings, 1 reply; 12+ messages in thread
From: Konstantin Kharlamov @ 2020-09-20 20:45 UTC (permalink / raw)
  To: Xavier Roche, libc-help

On Sat, 2020-09-19 at 16:00 +0200, Xavier Roche via Libc-help wrote:
> Hi Konstantin,
> 
> On Fri, Sep 18, 2020 at 6:45 PM Konstantin Kharlamov <hi-angel@yandex.ru>
> wrote:
> > `glibc.malloc.tcache_count`: "If set to zero, the per-thread cache is
> > effectively disabled."
> > So what you can do is play with this tunable to see whether it affects the
> > behavior you're observing.
>
> Thanks for pointing me to this setting, which I had indeed missed.
>
> Unfortunately it did not improve the situation
> (with GLIBC_TUNABLES=glibc.malloc.tcache_count=0): the process started
> to slowly grow as usual, now retaining more than 50GB of released memory.
>
> Looking at the malloc.c code, it seems that the thread cache cannot be
> the culprit: blocks are rather small (governed by the get_max_fast()
> limit).
>
> So this is probably related to another issue, possibly the number of heaps?
>
> The only workaround for now seems to be calling malloc_trim(0) on a
> regular basis (i.e. with a period of 30 seconds), but this is rather
> unfortunate.
> 
> Regards,
> 

Either way, this looks like a situation that deserves its own page on bugzilla :) So please report a bug about that.



* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-20 20:45     ` Konstantin Kharlamov
@ 2020-09-21  8:39       ` Xavier Roche
  2020-09-21  8:52         ` Florian Weimer
  0 siblings, 1 reply; 12+ messages in thread
From: Xavier Roche @ 2020-09-21  8:39 UTC (permalink / raw)
  To: libc-help

Hi,

On Sun, Sep 20, 2020 at 10:45 PM Konstantin Kharlamov
<hi-angel@yandex.ru> wrote:
> > So this is probably related to another issue, possibly the number of heaps?

Just a note: this is unfortunately not related to the number of heaps.
While 2.31 has more heaps than 2.23 in our setup (96 heaps in 2.31 vs.
57 heaps in 2.23), reducing the number of heaps to the number of cores
(i.e. 12 heaps, as suggested on a developer's blog:
https://codearcana.com/posts/2016/07/11/arena-leak-in-glibc.html) does
not appear to reduce the wasted space, with memory still being lost at
a steady rate of 10GB/hour.
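
For reference, capping the arena count can be done either with
GLIBC_TUNABLES=glibc.malloc.arena_max=12 at startup, or programmatically;
a minimal sketch of the latter (M_ARENA_MAX is the glibc-specific mallopt
parameter, and the call has to happen early, before the extra arenas are
created):

#include <malloc.h>

int main(void)
{
    /* Limit malloc to 12 arenas, matching the core count used in the
       experiment above. Arenas already created are not destroyed, so
       this must run before the application spawns its worker threads. */
    mallopt(M_ARENA_MAX, 12);

    /* ... rest of the application ... */
    return 0;
}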

> Either way, this looks like a situation that deserves its own page on bugzilla :) So please report a bug about that.

I am collecting all the data and experiments done, but I don't have a
reproducible case for now, and I'm afraid the bug will be hard to
handle without a clear test case?

Note that the issue seems a bit like
https://sourceware.org/bugzilla/show_bug.cgi?id=15321, but clearly
something different happened between 2.23 and 2.31, as the order of
magnitude is totally different.

Anyway, if this still makes sense to you, I'll file a bug once I have
all the bits collected!


Regards,

-- 
Xavier Roche -


* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-21  8:39       ` Xavier Roche
@ 2020-09-21  8:52         ` Florian Weimer
  2020-09-21  9:00           ` Xavier Roche
  2020-09-21 11:18           ` Xavier Roche
  0 siblings, 2 replies; 12+ messages in thread
From: Florian Weimer @ 2020-09-21  8:52 UTC (permalink / raw)
  To: Xavier Roche via Libc-help; +Cc: Xavier Roche

* Xavier Roche via Libc-help:

> Just a note: this is unfortunately not related to the number of heaps.
> While 2.31 has more heaps than 2.23 in our setup (96 heaps in 2.31 vs.
> 57 heaps in 2.23), reducing the number of heaps to the number of cores
> (i.e. 12 heaps, as suggested on a developer's blog:
> https://codearcana.com/posts/2016/07/11/arena-leak-in-glibc.html) does
> not appear to reduce the wasted space, with memory still being lost at
> a steady rate of 10GB/hour.

Then this looks like a distinct issue.

Do you use any of the memalign functions or C++'s aligned new operator?

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill



* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-21  8:52         ` Florian Weimer
@ 2020-09-21  9:00           ` Xavier Roche
  2020-09-21  9:12             ` Florian Weimer
  2020-09-21 11:18           ` Xavier Roche
  1 sibling, 1 reply; 12+ messages in thread
From: Xavier Roche @ 2020-09-21  9:00 UTC (permalink / raw)
  To: Xavier Roche via Libc-help

Hi Florian,

On Mon, Sep 21, 2020 at 10:52 AM Florian Weimer <fweimer@redhat.com> wrote:
> Do you use any of the memalign functions or C++'s aligned new operator?

Not at all, neither aligned new (C++17) nor memalign. The only notable
usage we have is shared mmap (read-only, shared, regular mmap of a
file), which may cause non-contiguous patterns?
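
For context, the mapping pattern is nothing more exotic than the
following sketch (map_file is just an illustrative name):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map an existing file read-only and shared. This never goes through
   malloc, but the resulting mappings land in the same address space
   as the heaps. */
static void *map_file(const char *path, size_t *len_out)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) != 0) {
        close(fd);
        return NULL;
    }

    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping stays valid after close() */
    if (p == MAP_FAILED)
        return NULL;

    *len_out = (size_t) st.st_size;
    return p;
}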


Regards,

-- 
Xavier Roche -


* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-21  9:00           ` Xavier Roche
@ 2020-09-21  9:12             ` Florian Weimer
  0 siblings, 0 replies; 12+ messages in thread
From: Florian Weimer @ 2020-09-21  9:12 UTC (permalink / raw)
  To: Xavier Roche via Libc-help; +Cc: Xavier Roche

* Xavier Roche via Libc-help:

> Hi Florian,
>
> On Mon, Sep 21, 2020 at 10:52 AM Florian Weimer <fweimer@redhat.com> wrote:
>> Do you use any of the memalign functions or C++'s aligned new operator?
>
> Not at all, neither aligned new (C++17) nor memalign. The only notable
> usage we have is shared mmap (read-only, shared, regular mmap of a
> file), which may cause non-contiguous patterns?

No, it should not matter in this context.

Thanks,
Florian
-- 
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill



* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-21  8:52         ` Florian Weimer
  2020-09-21  9:00           ` Xavier Roche
@ 2020-09-21 11:18           ` Xavier Roche
  2020-09-22 14:09             ` Xavier Roche
  1 sibling, 1 reply; 12+ messages in thread
From: Xavier Roche @ 2020-09-21 11:18 UTC (permalink / raw)
  To: Xavier Roche via Libc-help

Hi,

On Mon, Sep 21, 2020 at 10:52 AM Florian Weimer <fweimer@redhat.com> wrote:
> > Just a note: this is unfortunately not related to the number of heaps.
> > (i.e. 12 heaps, as suggested on a developer's blog:

Just to add some small details that may be insightful: in the latter
scenario, with 12 heaps, the total overall rest size was 100376946145
bytes (~100GB), and the biggest heap was 30GB (eight heaps ranging
from 1GB to 8GB, plus four heaps of 11GB, 13GB, 18GB and 30GB).

Memory spent in "rest" always sits in the last bucket (from 529089 to 67108801).

The allocation pattern is the green curve on the following graph:
https://i.imgur.com/EhSk7O5.png

The blue/orange curves come from the exact same hardware with glibc 2.23;
the orange one has malloc_trim() called on a regular basis.

Machines have 128GB of RAM (the process triggers an OOM at some point).

Extract from malloc_info() (full XML available if this can help):

<heap nr="5">
<sizes>
... skipped not really interesting sizes before
  <size from="29233" to="32113" total="184902" count="6"/>
  <size from="32817" to="36577" total="105603" count="3"/>
  <size from="37585" to="40673" total="196709" count="5"/>
  <size from="40977" to="64241" total="1158806" count="22"/>
  <size from="68305" to="97905" total="1438354" count="18"/>
  <size from="102401" to="129697" total="1989249" count="17"/>
  <size from="132193" to="162193" total="2455889" count="17"/>
  <size from="166849" to="260993" total="7694133" count="37"/>
  <size from="266993" to="523441" total="24032844" count="60"/>
  <size from="529089" to="67108801" total="30253119614" count="926"/>
  <unsorted from="65" to="1089" total="11943" count="39"/>
</sizes>
<total type="fast" count="45" size="2992"/>
<total type="rest" count="1349" size="30331111220"/>
<system type="current" size="30505955328"/>
<system type="max" size="30917689344"/>
<aspace type="total" size="30505955328"/>
<aspace type="mprotect" size="30505955328"/>
<aspace type="subheaps" size="464"/>
</heap>

Let me know if anything would help the investigation (including dumping
glibc internals live), and/or whether a ticket is relevant (even without
a trivially reproducible case).

Regards,

--
Xavier Roche -


* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-21 11:18           ` Xavier Roche
@ 2020-09-22 14:09             ` Xavier Roche
  2020-09-24 19:13               ` Carlos O'Donell
  0 siblings, 1 reply; 12+ messages in thread
From: Xavier Roche @ 2020-09-22 14:09 UTC (permalink / raw)
  To: Xavier Roche via Libc-help

Hi,

On Mon, Sep 21, 2020 at 1:18 PM Xavier Roche <xavier.roche@algolia.com> wrote:
> Just to add some small details that may be insightful: in the latter
> scenario, with 12 heaps, the total overall rest size was 100376946145

To be precise, even with GLIBC_TUNABLES=glibc.malloc.arena_max=1, the
pool (which is the main sbrk() pool, I suppose) grows.
In the latter example, I could count 6806 blocks totaling 35326884838 bytes.

The interesting part is that each pool does not seem to consume more
than the maximum memory the process allocated at some point (see the
green curve: https://i.imgur.com/Mig7Vek.png).

Could it be a threshold that gets pushed to an upper limit?


<malloc version="1">
<heap nr="0">
<sizes>
  <size from="49" to="49" total="441" count="9"/>
  <size from="113" to="113" total="34613369" count="306313"/>
  <size from="129" to="129" total="174795" count="1355"/>
  <size from="193" to="193" total="193" count="1"/>
  <size from="209" to="209" total="209" count="1"/>
  <size from="241" to="241" total="6266" count="26"/>
  <size from="257" to="257" total="1542" count="6"/>
  <size from="273" to="273" total="2730" count="10"/>
  <size from="289" to="289" total="289" count="1"/>
  <size from="353" to="353" total="353" count="1"/>
  <size from="449" to="449" total="449" count="1"/>
  <size from="465" to="465" total="465" count="1"/>
  <size from="481" to="481" total="481" count="1"/>
  <size from="513" to="513" total="513" count="1"/>
  <size from="545" to="545" total="545" count="1"/>
  <size from="657" to="657" total="657" count="1"/>
  <size from="1009" to="1009" total="1009" count="1"/>
  <size from="1041" to="1041" total="27066" count="26"/>
  <size from="1169" to="1169" total="1169" count="1"/>
  <size from="1601" to="1649" total="576371" count="355"/>
  <size from="1665" to="1713" total="602773" count="357"/>
  <size from="1729" to="1777" total="599126" count="342"/>
  <size from="1793" to="1841" total="510185" count="281"/>
  <size from="1857" to="1905" total="590058" count="314"/>
  <size from="1921" to="1969" total="507509" count="261"/>
  <size from="1985" to="2033" total="527687" count="263"/>
  <size from="2049" to="2049" total="145479" count="71"/>
  <size from="2337" to="2353" total="162053" count="69"/>
  <size from="2369" to="2417" total="538753" count="225"/>
  <size from="2433" to="2481" total="608664" count="248"/>
  <size from="2497" to="2545" total="585288" count="232"/>
  <size from="2561" to="2609" total="566843" count="219"/>
  <size from="2625" to="2673" total="598210" count="226"/>
  <size from="2689" to="2737" total="637595" count="235"/>
  <size from="2753" to="2801" total="572606" count="206"/>
  <size from="2817" to="2865" total="590912" count="208"/>
  <size from="2881" to="2929" total="597982" count="206"/>
  <size from="2945" to="2993" total="445638" count="150"/>
  <size from="3009" to="3057" total="497748" count="164"/>
  <size from="3073" to="3121" total="548737" count="177"/>
  <size from="3137" to="3569" total="3185467" count="955"/>
  <size from="3585" to="4081" total="3172670" count="830"/>
  <size from="4097" to="4097" total="532610" count="130"/>
  <size from="11153" to="12273" total="6874748" count="588"/>
  <size from="12289" to="16369" total="27765895" count="1975"/>
  <size from="16385" to="20465" total="23269372" count="1260"/>
  <size from="20481" to="24561" total="24447050" count="1098"/>
  <size from="24577" to="28657" total="22167173" count="837"/>
  <size from="28673" to="32753" total="22671238" count="742"/>
  <size from="33985" to="36849" total="14771665" count="417"/>
  <size from="36865" to="40945" total="20814264" count="536"/>
  <size from="40961" to="65521" total="111657063" count="2135"/>
  <size from="65553" to="98241" total="149901396" count="1860"/>
  <size from="98305" to="131041" total="144180545" count="1265"/>
  <size from="131073" to="163825" total="146380250" count="1002"/>
  <size from="163857" to="262113" total="330656633" count="1609"/>
  <size from="262161" to="524177" total="753145332" count="2068"/>
  <size from="524913" to="1162873249" total="35326884838" count="6806"/>
  <unsorted from="7041" to="7041" total="7041" count="1"/>
</sizes>
<total type="fast" count="0" size="0"/>
<total type="rest" count="338681" size="37178462424"/>
<system type="current" size="37812953048"/>
<system type="max" size="38723010520"/>
<aspace type="total" size="37812953048"/>
<aspace type="mprotect" size="37812953048"/>
</heap>
<total type="fast" count="0" size="0"/>
<total type="rest" count="338681" size="37178462424"/>
<total type="mmap" count="0" size="0"/>
<system type="current" size="37812953048"/>
<system type="max" size="38723010520"/>
<aspace type="total" size="37812953048"/>
<aspace type="mprotect" size="37812953048"/>
</malloc>


Regards,

-- 
Xavier Roche -


* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-22 14:09             ` Xavier Roche
@ 2020-09-24 19:13               ` Carlos O'Donell
  2020-09-25  7:53                 ` Xavier Roche
  0 siblings, 1 reply; 12+ messages in thread
From: Carlos O'Donell @ 2020-09-24 19:13 UTC (permalink / raw)
  To: Xavier Roche, libc-help

On 9/22/20 10:09 AM, Xavier Roche via Libc-help wrote:
> Hi,
> 
> On Mon, Sep 21, 2020 at 1:18 PM Xavier Roche <xavier.roche@algolia.com> wrote:
>> Just to add some small details that may be insightful: in the latter
>> scenario, with 12 heaps, the total overall rest size was 100376946145
>
> To be precise, even with GLIBC_TUNABLES=glibc.malloc.arena_max=1, the
> pool (which is the main sbrk() pool, I suppose) grows.
> In the latter example, I could count 6806 blocks totaling 35326884838 bytes.

The glibc algorithms are considered "heap based" rather than "page based."
Allocations that are temporally close to each other are also close to
each other on the virtual heap (made up of possibly many logical heaps
linked together as an arena).

The implication is that you can get ratcheting in memory usage if you have
a producer consumer model that effectively keeps growing the "top" of the
heap by somehow avoiding the reuse of the chunks on the free lists.

At some point the algorithm may become stable: you may have enough chunks
in the free lists to satisfy any of the workload's requests. The question
is: where is that stability point? That depends on the exact workload and
its interaction with the algorithm.

Alternatively, allocations that live a long time will prevent the "top" of
the heap from freeing down below that point. So you can ratchet up the top
of the heap, keep moving a long-lived allocation forward along with the
top, and thereby prevent anything below that point on the virtual heap
from being freed. If you can't manage those long-lived allocations, then
you may need to call malloc_trim() periodically to walk the free lists
and free the coalesced pages rather than relying on the heap to free down.
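
Here is a toy sketch of that pinning effect (single main arena assumed;
not your workload, just an illustration):

#include <malloc.h>
#include <stdlib.h>

int main(void)
{
    enum { N = 10000, SZ = 64 * 1024 };   /* below the mmap threshold */
    static void *burst[N];

    /* Producer burst: the heap top moves up by roughly N * SZ. */
    for (int i = 0; i < N; i++)
        burst[i] = malloc(SZ);

    /* Long-lived allocation made while the top is high: it now sits
       near the top of the heap and keeps it from shrinking back. */
    void *pin = malloc(128);

    /* Consumer side: everything below the pin is freed and coalesced,
       but the top cannot move down past the pinned chunk, so the
       memory is not returned to the kernel automatically. */
    for (int i = 0; i < N; i++)
        free(burst[i]);

    malloc_stats();  /* "in use" is tiny, "system" is still ~N * SZ */

    /* Walking the free lists releases the coalesced pages (madvise)
       even though the heap top itself cannot be trimmed. */
    malloc_trim(0);
    malloc_stats();

    free(pin);
    return 0;
}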

What we really need is a good heap dumper to visualize what your heap status
is and what is consuming space.

For similar problems we developed a tracer/simulator:
https://pagure.io/glibc-malloc-trace-utils, but we need the equivalent
dumper visualizer. The tracer can tell you if your usage is based
on actual API demand or not.

-- 
Cheers,
Carlos.



* Re: Abnormal memory usage with glibc 2.31 related to trimming strategy ?
  2020-09-24 19:13               ` Carlos O'Donell
@ 2020-09-25  7:53                 ` Xavier Roche
  0 siblings, 0 replies; 12+ messages in thread
From: Xavier Roche @ 2020-09-25  7:53 UTC (permalink / raw)
  To: libc-help

Hi Carlos,

On Thu, Sep 24, 2020 at 9:14 PM Carlos O'Donell <carlos@redhat.com> wrote:
> The implication is that you can get ratcheting in memory usage if you have
> a producer consumer model that effectively keeps growing the "top" of the
> heap by somehow avoiding the reuse of the chunks on the free lists.

We do have quite aggressive memory usage over short time spans, which
might explain the issue.

Note that after several days with only one pool, memory consumption
remains stable at 50GB (for 3GB of memory actually in use), which
corresponds to the peak memory consumed at some point in the process's
history.

I tried to look at the glibc allocator history (git log
--format=oneline glibc-2.23..glibc-2.31 -- malloc/malloc.c) but the
code has changed quite a bit between the two versions, so there are
many candidates for this regression.

> What we really need is a good heap dumper to visualize what your heap status
> is and what is consuming space.

Could an adapted version of malloc_trim() do the trick? We "just" need
to dump more details, basically (instead of counting blocks, we could
dump each block's start and size). Alternatively, maybe some gdb
scripting could do the trick.

I'd be happy to provide any information if anyone has some suggestions.

> For similar problems we developed a tracer/simulator:
> https://pagure.io/glibc-malloc-trace-utils, but we need the equivalent
> dumper visualizer. The tracer can tell you if your usage is based
> on actual API demand or not.

If it can provide useful information, I can definitely try to produce
a trace, but the amount of logs may be huge (the issue only becomes
visible after ten hours of intensive usage).

Thanks for the useful info!

-- 
Xavier Roche -

