I'll clarify this. I'm not saying you should never check return values; that would obviously be wrong. And of course there are exceptions to the general rule. If you're allocating a large chunk of memory and there's a reasonable expectation that it could fail, that failure should be reported, of course.
In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.
It's not bad to handle allocation failures, but in the vast majority of cases it's not a reasonable use of effort. You can write code for it if you want; have fun.
And just to be completely clear, I am ONLY talking about calls to malloc, new, realloc, etc., NOT to OS pools or anything like that. Obviously, if you allocate a 4 MB buffer for something (or the OS does for you), you expect that you might run out. This is ONLY about calls to lower-level heap allocators.
I don't think you'll find any experienced programmer recommending that you always check the return from malloc. That's completely absurd. There are always exceptions to the rule, however.
> In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.
I call BS on this. First of all, it's not the 100-byte allocation that is likely to fail; chances are the allocation that actually fails will be much bigger than 100 bytes, and the 100-byte one will succeed. (Though that is not 100% guaranteed either.) Second, the thing you're going to do in response to an allocation failure? You're going to unwind the stack, which will probably lead to some temporary buffers being freed. That already gets you more space to work with. (It's also untrue that you can't report errors without allocating memory, but that's a whole other story...)
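To make that concrete, here's a rough sketch (all names hypothetical) of what that unwinding looks like in plain C: the function that hits the failure frees whatever it had already allocated, hands an error code back up the stack, and the caller reports the problem without doing any further allocation.

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical worker: needs two buffers, fails cleanly if either
     * allocation comes back NULL. */
    static int build_report(const char *input, char **out)
    {
        size_t len = strlen(input) + 1;

        char *scratch = malloc(len);        /* temporary working space */
        if (scratch == NULL)
            return ENOMEM;                  /* nothing to unwind yet */

        char *result = malloc(len);
        if (result == NULL) {
            free(scratch);                  /* unwinding frees the temporary */
            return ENOMEM;
        }

        memcpy(result, input, len);         /* ... the real work ... */
        free(scratch);
        *out = result;
        return 0;
    }

    int main(void)
    {
        char *report;
        if (build_report("hello", &report) != 0) {
            /* Reporting needs no allocation: static string, write(2). */
            static const char msg[] = "out of memory, skipping report\n";
            write(STDERR_FILENO, msg, sizeof msg - 1);
            return 1;
        }
        free(report);
        return 0;
    }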
I suspected when I wrote in this thread that I'd see some handwavy nonsense about how it's impossible to cleanly recover from OOM, but the fact is I've witnessed it happening. I think some people would just rather tear down the entire process than have to think about handling errors, and they make up these falsities about how there's no way to do it in order to self-justify... Although, when I think back to a time when I shared your attitude, I think the real problem was that I hadn't yet seen it done well.
If you have time, can you expound on this? Is there, perhaps, an open source project that handles NULL returns from malloc in this way you could point me to?
My first instinct is to say look at something kernel-related. If an allocation fails, taking down the entire system is usually not an option (or not a good one, anyway). Searching http://lxr.linux.no/ for "kmalloc", you'll see a lot of callers handling failure.
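The shape you'll see over and over in those callers is roughly this (a schematic sketch, not lifted from any particular driver; the struct and function names are made up):

    #include <linux/errno.h>
    #include <linux/slab.h>

    /* Schematic: allocation failure becomes -ENOMEM, and anything already
     * allocated gets unwound; the kernel keeps running. */
    struct example_dev {
        void *rx_buf;
        void *tx_buf;
    };

    static int example_dev_setup(struct example_dev *dev, size_t n)
    {
        dev->rx_buf = kmalloc(n, GFP_KERNEL);
        if (!dev->rx_buf)
            return -ENOMEM;

        dev->tx_buf = kmalloc(n, GFP_KERNEL);
        if (!dev->tx_buf)
            goto err_free_rx;               /* unwind the earlier allocation */

        return 0;

    err_free_rx:
        kfree(dev->rx_buf);
        dev->rx_buf = NULL;
        return -ENOMEM;
    }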
Adding a bit more after the fact: most well-written libraries in C are like this too. It's not a library's business to decide to exit the process at any time. The library doesn't know whether it's been loaded into some long-running process that absolutely must keep going, for example.
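In the same spirit, a hypothetical sketch of how such a library surfaces the failure instead of exiting: the status code goes back to the application, which is the only thing that knows whether dying is acceptable.

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical library API: allocation failure becomes a status code.
     * The library never calls exit() or abort() on the caller's behalf. */
    enum mylib_status { MYLIB_OK = 0, MYLIB_ENOMEM };

    struct mylib_ctx {
        char *name;
    };

    enum mylib_status mylib_ctx_create(const char *name, struct mylib_ctx **out)
    {
        struct mylib_ctx *ctx = malloc(sizeof *ctx);
        if (ctx == NULL)
            return MYLIB_ENOMEM;            /* the application decides what to do */

        size_t len = strlen(name) + 1;
        ctx->name = malloc(len);
        if (ctx->name == NULL) {
            free(ctx);                      /* undo the partial construction */
            return MYLIB_ENOMEM;
        }

        memcpy(ctx->name, name, len);
        *out = ctx;
        return MYLIB_OK;
    }

    void mylib_ctx_destroy(struct mylib_ctx *ctx)
    {
        if (ctx == NULL)
            return;
        free(ctx->name);
        free(ctx);
    }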
I'm not sure what you imagine when you say "modern computer running a modern OS". Does that include anything other than desktop PCs and laptops? Because phones and tablets have some rather nasty memory limits for applications to deal with, which developers run into frequently.
The space I work in deals with phones and tablets, as well as other embedded systems (TVs, set-top boxes, etc.) that tend to run things people think of as "modern" (recentish Linux kernels, userlands based on Android or centered around WebKit), while having serious limits on memory and storage. My desktop calendar application uses more memory than we have available on some of these systems.
In these environments, it is essential to either avoid any possibility of memory exhaustion, or have ways to gracefully deal with the inevitable. This is often quite easy in theory -- several megabytes of memory might be used by a cached data structure that can easily be re-loaded or re-downloaded at the cost of a short wait when the user backs out of whatever screen they're in.
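As a hypothetical sketch of that idea in C: keep the big cached structure disposable, so an allocation failure elsewhere can be answered by dropping the cache and retrying, at the cost of a reload when it's next needed.

    #include <stdlib.h>

    /* Hypothetical disposable cache: if memory runs out, throw it away and
     * rebuild it later (re-parse, re-download, short wait for the user). */
    struct screen_cache {
        void  *data;
        size_t size;
    };

    static struct screen_cache g_cache;      /* several MB when populated */

    static void cache_drop(struct screen_cache *c)
    {
        free(c->data);
        c->data = NULL;
        c->size = 0;
    }

    /* Try to allocate; on failure sacrifice the cache and try once more. */
    void *alloc_or_reclaim(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL && g_cache.data != NULL) {
            cache_drop(&g_cache);             /* cheap to rebuild later */
            p = malloc(n);
        }
        return p;                             /* may still be NULL; caller handles it */
    }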
But one of the consequences of this cavalier attitude to memory allocation is that even in these constrained systems, platform owners have mandated apps sit atop an ever-growing stack of shit that makes it all but impossible for developers to effectively understand and manage memory usage.