
Say you have three unreliable programs that occasionally leak memory, spin the CPU, etc. (yes, it would be nice if all our programs were perfect, but they're not). It should be possible to run these three programs on the same server in such a way that the failure of one won't affect the others. For example: you have 3 servers, each running one instance of each program, you load balance between them, and you have some system that eventually detects when an instance fails, so that as far as the outside world is concerned, all your programs run reliably. At the moment, the most practical way to do this is to run 3 different VMs on each server, one for each program. Which is insane. There are some encouraging recent developments (e.g. Docker/LXC), but it should be easy to do that kind of isolation purely at the OS level.
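As a rough sketch of what the Docker route looks like today (image and container names here are hypothetical; the flags are standard `docker run` options): each program gets its own memory and CPU cap plus an automatic restart policy, so one leaky process can only kill its own container, not the whole box.

```shell
# Hypothetical images standing in for the three unreliable programs.
# --memory: OOM-kill the container, not the host, when it leaks past the cap.
# --cpus: a spinning process can't starve the other two programs.
# --restart=on-failure: the daemon restarts the instance when it dies.
docker run -d --name prog-a --memory=512m --cpus=1 --restart=on-failure leaky-prog-a
docker run -d --name prog-b --memory=512m --cpus=1 --restart=on-failure leaky-prog-b
docker run -d --name prog-c --memory=512m --cpus=1 --restart=on-failure leaky-prog-c
```

Under the hood this is just cgroups and namespaces, i.e. exactly the OS-level isolation being asked for, without a VM per program.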


In what way do the shell builtin "ulimit" and a "while true; do command; done" restart loop not suffice for this case?
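Spelled out, that suggestion looks something like the sketch below (the program name is hypothetical). `ulimit` in a subshell caps resources for that one process tree only, and the outer loop restarts the program whenever a limit kills it:

```shell
#!/bin/sh
# Supervise one flaky program: per-process limits plus restart on failure.
# "./leaky_prog" is a hypothetical stand-in for the unreliable binary.
supervise() {
    while true; do
        (
            ulimit -v 524288    # cap address space at 512 MB (units are KB)
            ulimit -t 60        # cap CPU time at 60 seconds
            exec "$@"           # replace the subshell with the program
        )
        echo "'$1' died (status $?), restarting..." >&2
        sleep 1                 # avoid a tight respawn loop if it dies instantly
    done
}

# Usage (one supervisor per unreliable program):
# supervise ./leaky_prog --port 8080
```

Because the limits are set inside the subshell, they don't leak into the parent shell, so each program can run under its own caps.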



