Hacker News

> This is a 32-core machine with 208GB of memory. Sending a ping every 5 minutes at 10M connections was roughly the limit of what the server could handle

While the 10M figure is impressive, this doesn't sound practical or useful.




Yeah, the original C10M formulation (from 2013) specified 8 cores, 64GB RAM, a 10GbE NIC, and an SSD for:

- 10 million concurrent connections

- 10 gigabits/second

- 10 million packets/second

- 10 microsecond latency

- 10 microsecond jitter

- 1 million connections/second

However, it was for raw connections, not websockets.
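Those spec numbers imply some tight per-connection and per-packet budgets. A minimal back-of-envelope sketch (assuming decimal gigabytes and an even spread across cores, which the original formulation doesn't spell out):

```python
# Rough budgets implied by the C10M spec above (8 cores, 64 GB RAM,
# 10M connections, 10M packets/second). Illustrative arithmetic only.
CONNECTIONS = 10_000_000
PACKETS_PER_SEC = 10_000_000
CORES = 8
RAM_BYTES = 64 * 10**9

mem_per_conn = RAM_BYTES / CONNECTIONS    # bytes of RAM available per connection
pps_per_core = PACKETS_PER_SEC / CORES    # packets each core must handle per second
ns_per_packet = 1e9 / pps_per_core        # time budget per packet, per core

print(f"{mem_per_conn:.0f} bytes/connection")   # 6400
print(f"{ns_per_packet:.0f} ns/packet/core")    # 800
```

800 ns per packet is on the order of a couple thousand CPU cycles, which is why C10M discussions tend toward kernel-bypass techniques rather than the ordinary socket path.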


Why isn't it practical? Maybe not in the cloud, but there are some hosting companies that offer quite powerful machines in the same ballpark.

Hetzner, a German hosting company, has some really good root servers: for about €120/month you get a beast of a machine with 16 cores and 128GB of RAM. Add some basic load balancing and you can reach 10M connections for a really low price. Add something like Docker and you get a PaaS-like setup that handles millions of connections without breaking a sweat.
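To put the claimed price in perspective, a quick sketch of the per-connection numbers for the box described above (the €120/16-core/128GB figures are from the comment; everything derived from them is illustrative):

```python
# Per-connection budget on a hypothetical 128 GB, EUR 120/month machine
# carrying the full 10M connections. Illustrative arithmetic only.
CONNECTIONS = 10_000_000
RAM_BYTES = 128 * 10**9
PRICE_EUR = 120

mem_per_conn = RAM_BYTES / CONNECTIONS                    # bytes per connection
eur_per_million = PRICE_EUR * 1_000_000 / CONNECTIONS     # EUR per 1M connections/month

print(f"{mem_per_conn:.0f} bytes/connection")   # 12800
print(f"{eur_per_million} EUR per million connections/month")  # 12.0
```

About 12.8KB of RAM per connection is a workable budget for a websocket server, which is what makes the "really low price" claim plausible on paper.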


If you're going for C10M you can already forget about Docker. For performance you really must be as close to the hardware as possible, which likely means tweaking the kernel.

The best example I know of squeezing as much as possible out of bare-metal servers is Stack Overflow, which IIRC runs on something in the ballpark of 20 servers (excluding replicas).


You do know that processes in a Docker container are regular processes, right? They just run in different namespaces. They are as "close to the hardware" as any of your other processes. I'm not sure how you'd go about using your own TCP stack, but if you can do it in a normal process you can do it inside a Docker container.
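The "regular processes" point is easy to demonstrate: the same kernel socket APIs and tuning knobs work identically inside and outside a container. A minimal sketch on Linux, using `SO_REUSEPORT` (a real Linux 3.9+ option often used to spread accept load across processes) — this is not a production accept loop:

```python
import socket

# A socket opened inside a container is an ordinary kernel socket; the same
# tuning knobs apply either way. Here we set SO_REUSEPORT, which lets
# multiple processes bind the same port and share incoming connections.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
s.listen(128)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT))  # nonzero when set
s.close()
```

Namespacing changes which network interfaces and ports the process sees, not how fast the kernel's network path is once a packet reaches the socket.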


When working with Docker, it is possible to get down to the metal as you mentioned.

Usually that means bypassing Docker's networking and using host networking (`--network host`), but then you lose a lot of the benefits of containers in the first place.

For example, a Weave or Calico networking layer on top makes scaling containers quite easy, but adds a fair bit of latency if you're aiming for 10M connections.

I would imagine that if you plan on using Docker seriously in your infrastructure, you are aiming for a multi-host setup with many containers spread throughout, and can easily settle for 100k connections per container.


Because 10M connections each sending 1 packet every 5 minutes only comes out to about 33k packets per second, which is actually pretty small.
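The arithmetic behind that figure:

```python
# Aggregate packet rate for 10M connections, each pinging once every 5 minutes.
connections = 10_000_000
interval_s = 5 * 60   # 300 seconds between pings per connection

pps = connections / interval_s
print(round(pps))  # 33333
```

Compare that with the C10M target of 10M packets/second and the quoted benchmark is indeed running at well under 1% of the formulation's packet-rate goal.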



