Memory allocation failures are virtually non-existent on modern desktop computers. Good practice is not to test return values from malloc, new, etc.

Memory can be allocated well beyond physical RAM size, so by the time a failure actually occurs, your program really should just crash and return its resources.

Embedded systems have fewer resources, and some have no virtual memory, so the situation there is different. But unless you know better, the best practice is still not to check the return from allocators. Running out of memory in a program intended for an embedded platform should be considered a bug.


I respectfully disagree with this. Ignoring return values is _not_ good practice. It is a slippery slope to bad software. By catching these memory errors, a program has a chance to tear down properly and report a message to the user instead of crashing.


Ugh. I would much rather know that the process died because of an allocation failure than try to figure out why some code is writing through a null pointer, as these are two very different kinds of bugs.


I'm having a hard time picturing a situation where it would be tough to figure out. Typically you allocate memory to use it right after. errno will be set too.

Of course, there is no reason not to do all your allocation through a wrapper function that checks and aborts on failure. I think the point was that surviving malloc failures is a dubious approach - instead, go all in; or, if it's a long-running service, provide a configurable max memory cap and assume that much will be available.
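A minimal sketch of such a wrapper in C (the name xmalloc is a common convention, not part of any standard):

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate or die: call sites never see NULL, and a failure still
       produces a clear message instead of a mystery segfault later.
       A production version would also special-case size == 0, where
       malloc may legally return NULL. */
    static void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (!p) {
            fputs("fatal: out of memory\n", stderr);
            abort();
        }
        return p;
    }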


In one scenario, the process writes "Out of memory." or similar to stderr. In the other, it segfaults. Maybe.

I'll take the clear error message.


In the case of my day job (a CCTV application: 80% C#, 20% C and C++), writing to a bad pointer will get reported as an AccessViolationException with no hope of getting a dump or a stack trace of the native code. An allocation failure will get translated into an OutOfMemoryException and typically includes stats on what is consuming RAM.


I'll clarify this. I'm not saying you shouldn't ever check return values; that would obviously not be the right thing to do. And of course there are exceptions to the general rule. If you're allocating a large chunk of memory and there's a reasonable expectation that it could fail, that should be reported, of course.

In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.

It's not wrong to handle allocation failures, but in the vast majority of cases it's just not worth the effort. You can write code for it if you want; have fun.

And just to be completely clear, I am ONLY talking about calls to malloc, new, realloc, etc., NOT to OS pools or anything like that. Obviously, if you allocate a 4 MB buffer for something (or the OS does for you), you expect that you might run out. This is ONLY in regard to calls to lower-level heap allocators.

I don't think you'll find any experienced programmer recommending that you always check the return from malloc. That's completely absurd. There are always exceptions to the rule, however.


> In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.

I call BS on this. First of all, it's not the 100 byte allocation that is likely to fail; chances are it's going to be bigger than 100 bytes and the 100 byte allocation will succeed. (Though that is not 100% either.) Second, the thing you're going to do in response to an allocation failure? You're going to unwind the stack, which will probably lead to some temporary buffers being freed. That already gets you more space to work with. (It's also untrue that you can't report errors without allocating memory but that's a whole other story...)
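To make that concrete, a sketch of the pattern being described, with hypothetical names: free what the failed operation had built up so far, report without allocating, and hand the decision back to the caller.

    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical routine showing the unwind-and-report pattern. */
    int load_record(size_t n)
    {
        char *tmp = malloc(n);        /* small temporary working buffer */
        char *out = malloc(n * 64);   /* the bigger allocation that fails first */
        if (!tmp || !out) {
            free(tmp);                /* unwinding frees the temporaries */
            free(out);                /* (free(NULL) is a no-op) */
            /* and reporting needs no allocation at all: */
            static const char msg[] = "load_record: out of memory\n";
            write(STDERR_FILENO, msg, sizeof msg - 1);
            return -1;                /* the caller decides what happens next */
        }
        /* ... do the real work with tmp and out ... */
        free(tmp);
        free(out);
        return 0;
    }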

I suspected when I wrote in this thread that I'd see some handwavy nonsense about how it's impossible to cleanly recover from OOM, but the fact is I've witnessed it happening. I think some people would just rather tear down the entire process than have to think about handling errors, and they make up these falsehoods about how there's no way to do it in order to justify that to themselves... Although, when I think back to a time when I shared your attitude, I think the real problem was that I hadn't yet seen it done well.


If you have time, can you expound on this? Is there, perhaps, an open source project that handles NULL returns from malloc in this way you could point me to?


My first instinct is to say look at something kernel-related. If an allocation fails, taking down the entire system is usually not an option (or not a good one anyway). Searching http://lxr.linux.no/ for "kmalloc" you see a lot of callers handling failure.
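The idiom you'll find there looks roughly like this (a representative sketch, not any specific call site; struct foo is hypothetical):

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct foo { int state; };

    /* Allocation failure becomes -ENOMEM and flows back up to the
       caller, which fails this one operation, not the whole machine. */
    static int foo_setup(struct foo **out)
    {
        struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);
        if (!f)
            return -ENOMEM;
        f->state = 0;
        *out = f;
        return 0;
    }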


Adding a bit more after the fact: most well-written libraries in C are also like this. It's not a library's business to decide to exit the process at any time. The library doesn't know if it's some long-running process that absolutely must keep going, for example.
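At the API level that usually looks like this (all names hypothetical): the library reports failure through its return value and leaves the policy to the embedding application.

    #include <stdlib.h>

    struct lib_buffer { char *data; size_t cap, len; };

    /* Returns 0 on success, -1 on allocation failure. Whether to
       retry, degrade, or exit is the application's call, not ours. */
    int lib_buffer_init(struct lib_buffer *b, size_t cap)
    {
        b->data = malloc(cap);
        if (!b->data)
            return -1;   /* never call exit() from library code */
        b->cap = cap;
        b->len = 0;
        return 0;
    }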


I'm not sure what you imagine when you say "modern computer running a modern OS". Does this not include anything but desktop PCs and laptops? Because phones and tablets have some rather nasty memory limits for applications to deal with, which developers run into frequently.

The space I work in deals with phones and tablets, as well as other embedded systems (TVs, set-top boxes, etc.) that tend to run things people think of as "modern" (recentish Linux kernels, userlands based on Android or centered around WebKit), while having serious limits on memory and storage. My desktop calendar application uses more memory than we have available on some of these systems.

In these environments, it is essential to either avoid any possibility of memory exhaustion, or have ways to gracefully deal with the inevitable. This is often quite easy in theory -- several megabytes of memory might be used by a cached data structure that can easily be re-loaded or re-downloaded at the cost of a short wait when the user backs out of whatever screen they're in.
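A sketch of that strategy (cache_evict_all is hypothetical): treat the cached data as reclaimable memory and retry the allocation once it has been released.

    #include <stdlib.h>

    /* Hypothetical: frees cached data that can be re-loaded or
       re-downloaded later at the cost of a short wait. */
    static void cache_evict_all(void) { /* ... */ }

    void *alloc_under_pressure(size_t n)
    {
        void *p = malloc(n);
        if (!p) {
            cache_evict_all();   /* trade cached data for headroom */
            p = malloc(n);       /* retry once with the cache released */
        }
        return p;                /* may still be NULL; the caller handles that */
    }

Git's xmalloc, linked elsewhere in this thread, historically did much the same thing, releasing pack memory and retrying before giving up.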

But one of the consequences of this cavalier attitude to memory allocation is that even in these constrained systems, platform owners have mandated apps sit atop an ever-growing stack of shit that makes it all but impossible for developers to effectively understand and manage memory usage.


That is an open path to security exploits.


See xmalloc [1] and friends, courtesy of the Git project.

[1]: https://github.com/git/git/blob/master/wrapper.c


SEEKING WORK - San Francisco Bay Area or Remote

10+ years of C/C++ video game development, scientific visualization, and real-time image processing.

Polyglot programmer who works mainly in C/C++, Python, Perl, and PHP on Android/iOS, Windows/Mac/Linux, embedded, game consoles, and backend servers.

Looking to help optimize low-level code and work on 2D or 3D visualizations and games. I work fast, write clean code, and can help mentor junior developers.

contact: ycj@linuxleverage.com


This is amazing. He's come up with completely new ways to do graphics on modern CPUs. This is genuinely impressive stuff and the breadth of it is mind blowing.

It's upsetting how dismissive he can be of his own work; he deserves to be proud of this. I certainly would be.


Couldn't you write up some of the more impressive parts? If he can't do it for himself, and it's really that mind-blowing, it'd be a shame to let it go ignored.


There's a lesson here about how sometimes it's okay to fit your data to your code. This was, after all, in PHP for years without causing many, if any, problems for end users.

I guess there won't be much agreement here on HN, but this deserves to be recognized as an amusing and clever solution to a problem. Even if the problem is self created.

PHP is great for doing quick and dirty dynamic web pages. So what if it doesn't scale out to a million line program?


> PHP is great for doing quick and dirty dynamic web pages.

And a lot more besides (subtle plug for http://www.phpbeyondtheweb.com )

> So what if it doesn't scale out to a million line program?

I've seen more than one PHP codebase of that size, happily (and maintainably) plodding along.


Except this was not an acceptable solution. Instead of using a better hashing algorithm than strlen(!), the function names were chosen by length, leading to one of the largest and most criticised flaws of PHP: its inconsistent function naming.
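For anyone who hasn't seen the details: hashing by strlen() means every name of a given length collides, so the only way to spread entries across buckets is to pick names of different lengths, which is fitting the data to the code in the worst way. A toy sketch:

    #include <string.h>

    #define NBUCKETS 64

    /* strlen() as the hash: "strlen" and "strcmp" (both length 6)
       always collide, while a deliberately long name such as
       "htmlspecialchars" gets a bucket to itself. */
    static unsigned hash_by_length(const char *name)
    {
        return (unsigned)strlen(name) % NBUCKETS;
    }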


This is really neat, but it seems to neglect an alternative (and just as simple) method. Instead of traversing each pixel of the source buffer, traverse each pixel of the destination buffer, sampling the correct pixels from the source. Different sampling methods yield different levels of quality and different efficiencies in CPU use and memory access.

This method is just as simple to code, doesn't suffer from the missing-pixel aliasing problem of the naive method in the article, and is also capable of higher-quality results than the shearing method.
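A sketch of that inverse-mapping approach with nearest-neighbour sampling (the buffer layout and names are my assumptions): for each destination pixel, rotate its coordinates back by -theta and fetch the source pixel that lands there.

    #include <math.h>
    #include <stdint.h>

    /* Rotate src (w x h, one byte per pixel, row-major) into dst of
       the same size, about the image centre. Every destination pixel
       gets exactly one value, so no holes appear in the output. */
    void rotate_inverse(const uint8_t *src, uint8_t *dst,
                        int w, int h, double theta)
    {
        double c = cos(theta), s = sin(theta);
        double cx = w / 2.0, cy = h / 2.0;

        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                /* inverse rotation: where does (x, y) come from? */
                double dx = x - cx, dy = y - cy;
                int sx = (int)lround( dx * c + dy * s + cx);
                int sy = (int)lround(-dx * s + dy * c + cy);

                dst[y * w + x] = (sx >= 0 && sx < w && sy >= 0 && sy < h)
                                     ? src[sy * w + sx]
                                     : 0;   /* outside the source: background */
            }
        }
    }

Swapping lround() for a bilinear blend of the four neighbouring source pixels is the usual quality upgrade.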


You have to traverse sqrt(2) times more pixels than in the original method unless you somehow know where the borders of the destination square are (additional computation).


This is true only if the source image is smaller than the destination image. In the article, the source image is shown as the same size as the destination, and includes a large white border that can be clipped without the inner image being affected.

In the event that the destination buffer is much larger than the source, that extra computation is trivial (it's the same calculation already being done for every pixel). Since it only needs to be done for the corners, not per pixel, the extra time spent should be minimal.

The shearing method in the article is genuinely clever and totally cool, but I just can't shake the feeling that even on 1980s hardware, this method would have been better. On modern hardware there's no question: it's still used to this day. Nowhere near as cool, though.


Could you post some links explaining your method?

