From what I understand, C10K was mostly a RAM limitation. As system RAM got bigger, the limit has naturally risen. Here are some guys hitting a million connections in Erlang.
From my understanding, the C10K problem is more a thread problem than a RAM limitation. It shows up when each request creates its own thread on the server (as in Apache or IIS). A computer has trouble running 10,000 threads at the same time: too much of the OS's time goes into scheduling, just finding a thread that isn't blocked, locked, or waiting on database results.
Node.js solves this by doing everything asynchronously (in a single thread).
Go solves it with cheap, cleverly scheduled threads (goroutines).
Functional programming seems very well suited to this issue (no need for locking mechanisms, and lightweight threads). A rough sketch of both models follows below.
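To make the contrast concrete, here is a minimal Clojure sketch (Clojure being the language under discussion; the port and the worker count are made up for illustration). The first server parks a full OS thread per client, the style that falls over around 10k; the second uses core.async go blocks, cheap cooperative processes multiplexed over a small thread pool, similar in spirit to goroutines:

    ;; Thread-per-connection: the model that breaks down around 10k clients.
    (require '[clojure.java.io :as io])
    (import '(java.net ServerSocket))

    (defn blocking-server [port]
      (with-open [server (ServerSocket. port)]
        (while true
          (let [sock (.accept server)]   ; blocks until a client connects
            ;; one OS thread (~1 MB of stack) parked per client, even while idle
            (.start (Thread. (fn []
                               (with-open [s sock]
                                 (io/copy (.getInputStream s)
                                          (.getOutputStream s))))))))))

    ;; Lightweight processes: thousands of go blocks share a few OS threads.
    ;; (needs org.clojure/core.async on the classpath)
    (require '[clojure.core.async :refer [go timeout <!]])

    (defn start-workers [n]
      (dotimes [_ n]
        (go (loop []
              (<! (timeout 1000))   ; "parked", not blocked: no OS thread is held
              (recur)))))

    ;; (start-workers 100000) is fine; 100k Thread objects would not be.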
Netty is in the dev dependencies, but only for benchmarking, I would guess.
[edit]
This project is one of the best things to happen in the Clojure web space recently, imho. Not only is it a game changer in performance/resource use, it makes websockets and async in general trivial to use and actually production ready (the same goes for its client); a rough sketch of the API is below.
Some would mention Pedestal, but it forces you to learn quite a few abstractions, is totally alien compared to the rest of the Clojure web ecosystem, and only supports a single async "protocol" (SSE). Pedestal is about a full-stack experience, integration with the client, etc., so it will certainly attract other users.
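As for "trivial": this is roughly what a websocket echo looks like in http-kit, written from memory (the port is arbitrary and the exact callback names may differ between versions, so treat it as a sketch, not gospel):

    (require '[org.httpkit.server :as http])

    ;; a connection is just a channel plus callbacks; no thread parked per client
    (defn ws-handler [request]
      (http/with-channel request ch
        (http/on-receive ch (fn [msg] (http/send! ch msg)))      ; echo it back
        (http/on-close ch (fn [status] (println "closed:" status)))))

    (http/run-server ws-handler {:port 8080})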
Ah, interesting, I should have looked deeper. Well, they're both almost certainly based on Java NIO then, which once again leads us to epoll and friends.
10k isn't really a limit. On Linux, the default fs.file-max works out to roughly 100 file handles per 1 MB of RAM (the kernel budgets about 10% of memory for them), so a 16 GB box starts around 1.6 million. A million connections isn't really hard nowadays.
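For reference, you can read the current ceiling straight off /proc; a tiny Clojure snippet (Linux-only, and note the per-process limit, ulimit -n, is usually the one you hit first):

    (require '[clojure.string :as str])

    ;; kernel-wide file-handle ceiling; per-process limits come from ulimit -n
    (println "fs.file-max:" (str/trim (slurp "/proc/sys/fs/file-max")))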
Netty has become the standard library for NIO on the JVM and is widely used by programmers in various JVM languages. It's also refreshing to see high-performance NIO systems written without Netty for a change.
The WhatsApp 2-million-connection result was on a 24-core box with a ton of RAM. The Clojure benchmark was pulled off on a 4-core desktop. Also, WhatsApp connections are mostly idle; only a small fraction of users would be sending a message in any given second.
Edit: It looks like the test was run only over localhost, which also means it artificially removes much of the processing and actual network I/O from the equation.
Then what are we really measuring? Is it how many connections we can get on as small a machine as possible? If so, then I am sure I can get 600K connections with Erlang on a smaller system. At least running a benchmark.
There is one very significant difference: the Clojure benchmark was just a benchmark, while for WhatsApp it was their actual system doing something real. They have mentioned to me that they managed to push their system to 3M connections, but not in production.
This does not measure 600k concurrent active connections, because each client only sends a request once every 5-45 s. To see how many concurrent connections you can actually support, you need to know how much RAM each connection consumes and how much RAM is available to your application. A back-of-the-envelope version is below.
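The arithmetic is simple once you have a measured per-connection figure; here it is in Clojure, with the 20 KB/connection number invented purely for illustration (measure your own with a heap profiler):

    ;; rough capacity estimate; per-conn-bytes must be measured, not guessed
    (defn max-connections [ram-bytes per-conn-bytes]
      (quot ram-bytes per-conn-bytes))

    (max-connections (* 4 1024 1024 1024)   ; 4 GB of headroom
                     (* 20 1024))           ; assumed 20 KB per idle connection
    ;; => 209715, i.e. roughly 200k idle connections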
Y'know... that little "K" abbreviation on the end is a pretty important detail. I skimmed the article about three times, thinking to myself: "600 users... Really? Big deal."
It wouldn't hurt to emphasize "600,000" by actually writing out all of the zeroes.
"That little K abbreviation" is kind of a common thing... I'm not sure what your experience is, but in order to understand a lot of what gets discussed on the web in terms of performance, you'll need to improve your attention to detail.