
OOOOOOO, something I actually know!

> Now why would you want to know the committed memory over the actual physical RAM in use?

Because on Windows, committed size is charged against a hard limit: physical RAM plus the pagefile. You can commit a lot more than RAM, but watch your pagefile grow.

malloc() can fail on Windows for this reason. This is not the same on Linux or any of the BSDs I've tried. :)

I discovered this in August last year. Sometimes understanding a lot about Linux can make you blind to the architectural differences Windows has.

I wrote some cross-platform C++ to show it [0].

[0]: https://gist.github.com/dijit/cb2caa1a40d48e03613f5af0e518d6...
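(Not the gist itself, just a minimal sketch of the same experiment, assuming a 64-bit build: allocate far more than physical RAM without ever touching it, and see whether the allocation call itself fails.)

    // Allocate up to 1 TiB in 1 GiB chunks without touching the memory.
    // On Windows each malloc() charges the commit limit (RAM + pagefile)
    // up front, so it eventually returns nullptr; on Linux under the
    // default overcommit policy it usually just keeps succeeding.
    #include <cstdio>
    #include <cstdlib>

    int main() {
        const size_t chunk = 1ull << 30; // 1 GiB
        for (int i = 0; i < 1024; ++i) {
            void* p = std::malloc(chunk);
            if (!p) {
                std::printf("malloc failed after %d GiB committed\n", i);
                return 1;
            }
            // Never written to, so no physical pages are actually used.
        }
        std::printf("committed 1 TiB without a single failure\n");
        return 0;
    }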




> This is not the same on Linux

Linux lets you choose an overcommit policy. https://www.kernel.org/doc/Documentation/vm/overcommit-accou...
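For reference, the policy is exposed at /proc/sys/vm/overcommit_memory (0 = heuristic, 1 = always overcommit, 2 = strict accounting), and root can switch it with, say, sysctl vm.overcommit_memory=2. A trivial check from C++:

    // Print the current Linux overcommit policy.
    #include <fstream>
    #include <iostream>

    int main() {
        std::ifstream f("/proc/sys/vm/overcommit_memory");
        int mode;
        if (f >> mode)
            std::cout << "vm.overcommit_memory = " << mode << '\n';
        else
            std::cerr << "couldn't read /proc/sys/vm/overcommit_memory\n";
        return 0;
    }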


You can choose an overcommit policy on Linux, but most library developers on Linux have stuck with the default and regularly allocate wide swaths of address space they don't intend to use.

This is a real pain when moving an application from FreeBSD to Linux, as effective limits on memory are lost (a ulimit set at ~90% of RAM results in a malloc failure and a clean crash dump, rather than death by thrashing or an untrappable OOM kill).

There could maybe be a middle ground where malloc would allocate large chunks of address space for ease of administration, and then ask the OS to commit those pages in smaller chunks as needed. Often there's not a lot you can do when allocation fails, but it's far more actionable when the failure is returned from a syscall than when you fault on a write to an unbacked page, which could happen basically anywhere in your program.
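That middle ground is roughly expressible today (a sketch using POSIX calls; Windows has the same split natively via VirtualAlloc's MEM_RESERVE and MEM_COMMIT): reserve address space with PROT_NONE, then commit in chunks with mprotect(), which fails cleanly with ENOMEM under strict accounting instead of faulting later.

    #include <sys/mman.h>
    #include <cerrno>
    #include <cstdio>

    int main() {
        const size_t reserve = 1ull << 33; // 8 GiB of address space
        const size_t chunk   = 1ull << 20; // commit 1 MiB at a time

        // PROT_NONE + MAP_NORESERVE: address space only, no commit charge.
        char* base = static_cast<char*>(
            mmap(nullptr, reserve, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0));
        if (base == MAP_FAILED) { std::perror("mmap"); return 1; }

        size_t committed = 0;
        while (committed < reserve) {
            // Making pages writable is the "commit" step; under
            // vm.overcommit_memory=2 this is where ENOMEM shows up.
            if (mprotect(base + committed, chunk,
                         PROT_READ | PROT_WRITE) != 0) {
                std::printf("commit failed at %zu MiB (errno %d)\n",
                            committed >> 20, errno);
                break;
            }
            committed += chunk;
        }
        munmap(base, reserve);
        return 0;
    }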


This doesn't occur if you memory-map a file, though (barring certain flags you can set, as stated by quotemstr below).

You can legitimately have Windows reporting many GBs of committed memory without actually using that RAM, and without touching the system's pagefile/swap. It's also common for this to occur: pretty much every program capable of opening large files (GB+) in a non-sequential fashion does this.
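For instance (a hedged Windows sketch; "large.bin" is a placeholder path): map a multi-GB file, and nothing is paged in until you actually read from the view.

    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE file = CreateFileW(L"large.bin", GENERIC_READ,
                                  FILE_SHARE_READ, nullptr, OPEN_EXISTING,
                                  FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) { std::puts("open failed"); return 1; }

        // The file itself is the backing store, not the pagefile.
        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY,
                                            0, 0, nullptr); // 0,0 = whole file
        if (!mapping) { std::puts("mapping failed"); CloseHandle(file); return 1; }

        const char* view = static_cast<const char*>(
            MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0)); // entire file
        if (view) {
            // No physical RAM is used until a page is touched, like here:
            std::printf("first byte: %d\n", view[0]);
            UnmapViewOfFile(view);
        }
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }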


Hm, maybe I wasn't clear. It's not actually /using/ the memory when it's committed.

But the sum of your committed memory across all applications must exist in some form on the host system (RAM plus pagefile).

So, for example, it's a common performance optimisation in C++ to double the allocated space whenever you grow a container, because malloc() and zeroing are kinda slow. What this means is that you're holding space you aren't actually using yet.

So, say you have 128MB of RAM available to your program, and you just doubled your array from 75MB to 150MB. Well, that extra 22MB beyond physical RAM must exist somewhere (the pagefile), even though you're only actually using 75MB.. even if the OS shows the memory as free (which it will).
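A sketch of that growth step (same numbers as above; whether the allocation call itself fails here depends on the platform's accounting, per the rest of this thread):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        const size_t mb = 1024 * 1024;
        char* buf = static_cast<char*>(std::malloc(75 * mb));
        if (!buf) return 1;
        std::memset(buf, 1, 75 * mb); // actually use the first 75MB

        // Doubling: the commit charge jumps to 150MB even though the
        // new half is never written. On Windows this is where you can
        // get a nullptr back if RAM + pagefile can't cover the charge.
        char* grown = static_cast<char*>(std::realloc(buf, 150 * mb));
        if (!grown) {
            std::puts("realloc failed: commit limit reached");
            std::free(buf);
            return 1;
        }
        std::puts("150MB committed, ~75MB resident");
        std::free(grown);
        return 0;
    }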

Them's the rules, and I promise you that I have thoroughly tested this, as it was causing a really nice crash on my servers even though we had more than 50% of memory "free".


Memory-mapped files do exist in some form or another, though: as the file itself. That's the point of memory-mapping files. You can, right now, memory-map every file on your computer. That's TBs of files. There will be no physical RAM usage and no swap-file usage unless you actually start working with those files (at which point they will be paged in). Yet this will show as 'committed memory' in Task Manager.
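The POSIX analogue of the same point, for comparison ("some.log" is a placeholder path):

    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("some.log", O_RDONLY);
        if (fd < 0) { std::perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) != 0) { std::perror("fstat"); close(fd); return 1; }

        // File-backed and read-only: the file is its own backing store,
        // so neither RAM nor swap is consumed until pages are touched.
        void* p = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { std::perror("mmap"); close(fd); return 1; }

        std::printf("mapped %lld bytes; nothing resident yet\n",
                    (long long)st.st_size);
        munmap(p, st.st_size);
        close(fd);
        return 0;
    }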

Your example above isn't memory-mapping files; it's just allocating RAM. That does have to exist in RAM or the swap file. But that's not what 'committed memory' above shows, which is the whole point: the column the article is telling people to use is misleading.


They're talking about mmap'ed files, not memory.



