On Mon, Jun 05, 2023 at 03:28:00AM +0300, Konstantin Kharlamov wrote:
> Hi, just two theoretical questions:
>
> 1. I know that calling malloc() only allocates virtual memory, not
> real memory. You can allocate terabytes of it, and some apps do (e.g.
> on my system Electron has 1.1T allocated). It only turns into real
> memory when the kernel backs it on first access. But then, assuming
> the OOM killer is disabled, what happens if I access such virtual
> memory, forcing it to become real, while the system is out of
> physical memory at that moment?
> 2. Is it unrealistic to expect ENOMEM from malloc()? That is because
> a) most systems have the OOM killer enabled, so instead of ENOMEM
> some app will get killed, and b) in the absence of the OOM killer the
> virtual memory is still allocated successfully, which brings us back
> to question 1.
>
> P.S.: I asked the first question on #glibc on OFTC on Saturday as
> well, but still got no reply, so I decided to give the mailing list
> a try.

I guess this is more an operating-system question than a libc one.

Since you are talking about the OOM killer, I further guess that your
context is Linux. In this case, your magic keyword is "memory
overcommitment". There is a setting for that; see for example here:

https://www.kernel.org/doc/html/v5.1/vm/overcommit-accounting.html

Cheers
-- t