From: Brian Inglis <Brian.Inglis@SystematicSw.ab.ca>
To: cygwin@cygwin.com
Subject: Re: CRITICAL ls MEMORY LEAK
Date: Sun, 21 Feb 2021 11:05:23 -0700 [thread overview]
Message-ID: <2ce762f7-b136-279b-b355-f0ff00cb99e2@SystematicSw.ab.ca> (raw)
In-Reply-To: <003401d70864$cd3b3400$67b19c00$@gmail.com>
On 2021-02-21 08:18, Satalink via Cygwin wrote:
> I deal with a lot of very large files on a regular basis. I've noticed that
> when I delve into these directories in mintty and issue the command ls
> -l (or ls --color=auto), a very large chunk of memory is consumed. The
> memory leak seems to be proportionate to the number and size of files within
> the containing folder.
>
> To reproduce:
>
> generate or use a folder containing 50 (or more) 2G+ files.
>
> // In this demonstration, I ran the command on a directory containing 143
> files ranging in size from 2GB to 5GB.
> $> free
> total used free shared buff/cache available
> Mem: 50276004 16465148 33810856 0 0 33810856
> Swap: 12058624 186468 11872156
> $> ls -l --color=auto
> . (contents displayed after some delay)
> $> free
> total used free shared buff/cache available
> Mem: 50276004 19844660 30431344 0 0 30431344
> Swap: 12058624 186460 11872164
> // After 10 consecutive executions of the 'ls -al --color=auto' command in
> this directory, ls has consumed 86% of my system's real memory.
> $> free
> total used free shared buff/cache available
> Mem: 50276004 43587560 6688444 0 0 6688444
> Swap: 12058624 301068 11757556
> // If I continue (usually unknowingly), my system becomes completely depleted
> of resources, to the point that my mouse will barely respond to movement.
That number is just the amount of unused physical memory on the system, and it
will go down as you use the system, because unused memory is wasted memory.
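One quick sanity check (a sketch, not from the original report): each ls
invocation is a separate short-lived process, so whatever memory it allocates
is returned to the OS when it exits; it cannot keep "consuming" memory after
the prompt comes back. You can confirm that no ls process survives the runs:

```shell
#!/bin/sh
# Sketch: run ls repeatedly, then confirm no ls process survives.
# An exited process cannot hold on to any process-private memory.
for i in $(seq 1 10); do
    ls -l / > /dev/null
done
# count any surviving processes named exactly "ls" (expect 0)
remaining=$(pgrep -x ls | wc -l)
echo "surviving ls processes: $remaining"
```

If memory use keeps climbing after ls has exited, the cause is elsewhere in
the system (cache, AV, indexing), not a leak in ls itself.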
Better to use Windows utilities like Task Manager/Performance/Memory, Resource
Monitor/Memory, or Sysinternals RAMMap, which give system-relevant details.
You will probably find that a lot of your memory is in Standby which means it is
being used to memory map or cache files, and it should be released when needed.
Unfortunately Windows often can't release the memory as fast as programs want to
use it.
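The same effect is easy to see with the Linux page cache (a rough analogy to
Windows Standby memory, not Cygwin-specific): reading a file makes memory look
"used", but the kernel hands those pages back under pressure. A minimal
sketch, using a hypothetical temporary file:

```shell
#!/bin/sh
# Sketch (hypothetical path): cached file pages show up as "used"
# memory but are reclaimable, much like Windows Standby pages.
f=/tmp/cache-demo.$$
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null
cat "$f" > /dev/null    # pull the file into the page cache
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "page cache before: ${before} kB, after: ${after} kB"
rm -f "$f"
```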
Just accessing files can cause AV/Defender to look at what you are doing, and
have AV and Search indexing take a look inside the files, which ties up a fair
amount of resources for a while.
You need to look a bit deeper, over a longer period, to decide whether there
are real issues, and if so, where they are.
--
Take care. Thanks, Brian Inglis, Calgary, Alberta, Canada
This email may be disturbing to some readers as it contains
too much technical detail. Reader discretion is advised.
[Data in binary units and prefixes, physical quantities in SI.]
Thread overview: 6+ messages
2021-02-21 15:18 neal.garrett
2021-02-21 18:05 ` Brian Inglis [this message]
2021-02-22 20:12 ` Doug Henderson
2021-02-22 20:30 ` Brian Inglis
2021-02-22 21:50 ` Hans-Bernhard Bröker
2021-02-22 23:47 ` Brian Inglis