It seems like it's really getting killed under the load. This also isn't the first Node project demo I've seen get destroyed by a bunch of people visiting it to check it out. Anyone have tips for deploying Node so that your server won't just fall over? What's the point of supporting thousands of users in, say, socket.io if most people's deployment schemes allow for a max of, say, 250 concurrent connections?
Be sure to use Node's cluster API (http://nodejs.org/docs/v0.6.0/api/cluster.html) so each core on your server gets utilized. Looking at the source, it doesn't look like this project is doing that, so unless they're manually running an app instance on each core and load balancing between them, they may be underutilizing their hardware.
At ClassDojo we support many thousands of concurrent users on node.js by using cluster to create multiple worker processes, with multiple boxes behind Amazon ELB. All static assets are served from a CDN.
Handling your state in memory is fine for examples like this, but in general you should defer all state to the database layer or something like redis. That way your app server remains entirely stateless, so any node.js process on any box can serve a request identically - you can scale up just by adding boxes.
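To show the shape of that, here's a sketch where all session state goes through an async store interface. In production the store would be Redis; the Map-backed stand-in (and the `handleLogin` helper) are hypothetical, just so the example runs without a Redis server:

```javascript
'use strict';
// Stand-in for an external store like Redis. Because it exposes the
// same async get/set shape a Redis client would, the handler code
// below wouldn't change when you swap the real thing in.
class MemoryStore {
  constructor() { this.data = new Map(); }
  async get(key) { return this.data.get(key); }
  async set(key, value) { this.data.set(key, value); }
}

// The handler never touches process-local state, so any worker on
// any box could serve this request identically.
async function handleLogin(store, sessionId, userId) {
  await store.set('session:' + sessionId, { userId, loggedInAt: Date.now() });
  return store.get('session:' + sessionId);
}

const store = new MemoryStore();
handleLogin(store, 'abc123', 42).then((session) => {
  console.log(session.userId); // prints 42
});
```

The point is that nothing about a request depends on *which* process handled the previous one - sticky sessions become unnecessary.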
A very fine way is to run multiple processes and have each client connect to one of them at random. Then use a message queue - say redis, or rabbitmq - where each process reads messages and deletes them from the queue once delivered. Since a client is connected to only one of the servers, this eliminates the chance of sending the same message to the same client more than once. It also helps split up the incoming messages and the outgoing ones (which are usually far more numerous, since one incoming message is delivered to everyone else in the chat room).
Any tips or explanations would be most welcome.