
In my experience, the Linux kernel handles having no swap at all very badly, so you need a small amount.

Increasing the swap, which is the suggested solution, is, however, a terrible idea. As soon as you hit high memory usage, your IO load will go through the roof and everything will grind to a halt.

The solution here is separation of services - i.e. put Solr on a different box, so that if it spirals it doesn't take out other services.

The OOM killer is your friend for recovering from horrible conditions, but as soon as you hit it or start swapping heavily, something's gone wrong.
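
To put "hitting swap means something's gone wrong" into practice, something like this rough sketch is the alert I'd want (assuming a Linux box; it just reads /proc/meminfo and complains if any swap is actually in use):

    #!/usr/bin/env python3
    # Warn if the box has started using swap at all; on a correctly
    # sized server with only a small swap area this should stay at zero.

    def meminfo():
        """Parse /proc/meminfo into a dict of values in kB."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.strip().split()[0])
        return info

    m = meminfo()
    swap_used_kb = m["SwapTotal"] - m["SwapFree"]
    if m["SwapTotal"] == 0:
        print("no swap configured")
    elif swap_used_kb > 0:
        print("WARNING: %d kB of swap in use; time to investigate" % swap_used_kb)
    else:
        print("swap present but unused; OK")

Run it from cron or your monitoring agent; the point is that the threshold is effectively zero, not some generous percentage.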



> In my experience, the Linux kernel handles having no swap at all very badly, so you need a small amount.

Why?

I'm pretty sure we disable swap at Google. Maybe swap was necessary back in the days when memory was really tight, but it seems like a terrible idea now. Especially since the scheduler is completely oblivious to swap AFAIK, which means that a heavily swapped system will spend most of its timeslices just swapping program code back into memory. It's the worst kind of thrashing.


You are right: the best solution is separation of services. But for a startup that runs seven or so services like this, it's a close call. You'll often have to run 2-3 services on one box, because $100 * 7 machines is too much burn.


It's not a problem to run several services on the same box, as long as each of them is sized appropriately. What I suggest is to at least roughly calculate how much RAM (for example) each service could use at peak time, then limit that usage so that the sum of memory used by all services at peak is less than the amount of RAM you've got on your server.
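
As a rough sketch of the kind of back-of-the-envelope check I mean (the per-service budgets and the headroom figure below are made-up placeholders, not real numbers):

    #!/usr/bin/env python3
    # Capacity check: the sum of per-service peak RAM budgets must fit
    # in physical RAM, leaving some headroom for the kernel and page cache.

    PEAK_RAM_MB = {          # example budgets; substitute your own measurements
        "solr":     2048,
        "app":      1024,
        "postgres": 1536,
    }
    HEADROOM_MB = 512        # assumed reserve for kernel + page cache

    def total_ram_mb():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) // 1024  # kB -> MB
        raise RuntimeError("MemTotal not found")

    budget = sum(PEAK_RAM_MB.values()) + HEADROOM_MB
    ram = total_ram_mb()
    print("peak budget %d MB vs physical RAM %d MB" % (budget, ram))
    if budget > ram:
        print("over budget: move a service to another box or cap its memory")

If the budget doesn't fit, either move a service elsewhere or enforce the per-service limits (e.g. with the service's own memory settings or cgroup limits) so one process can't starve the rest.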



