Too bad that networks/proxies usually block websocket connections, causing this solution to fail.


You could also just use Primus, which exposes a Stream-compatible wrapper by default for the client and server.

https://github.com/primus/primus

(Primus uses engine.io, sockjs, browserchannel, or websockets internally, so you're no longer locked into a specific framework)
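
For a rough idea of what that looks like, here's a minimal server-side sketch using Primus's documented options; the transformer name and port are illustrative choices, not anything from the benchmark:

  // A minimal Primus server; the transformer string is the single line
  // you'd change to swap frameworks (e.g. 'engine.io', 'sockjs').
  var Primus = require('primus')
    , http = require('http');

  var server = http.createServer()
    , primus = new Primus(server, { transformer: 'websockets' });

  primus.on('connection', function (spark) {
    // `spark` is the Stream-compatible wrapper for a single client
    spark.write('hello');
    spark.on('data', function (data) {
      console.log('received', data);
    });
  });

  server.listen(8080);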


Someone should do a Primus vs. socket.io comparison.

Why would I need both engine.io and websockets, for example, when engine.io already uses websockets if it can upgrade the connection to them?


You can't use both. In Primus you can select only one transformer from those listed above by V1, but if something doesn't work you can switch it with a single line of code. No rewrites, no need to learn a new API.


I know you can't use both, but doesn't engine.io already support websockets?


Yes it does, but if you are building something that works perfectly using only websockets (e.g. a command-line utility), why would you use engine.io?

You can tell engine.io to only use websockets, but what's the point if you can use websockets directly and have the same API?
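
For example, a minimal client sketch with the `ws` module; the URL is an assumption, standing in for whatever websocket server you're talking to:

  // Plain websocket client via the `ws` module: no engine.io layer,
  // no fallback transports, same event-driven API.
  var WebSocket = require('ws');

  var socket = new WebSocket('ws://localhost:8082');

  socket.on('open', function () {
    socket.send('hello');
  });

  socket.on('message', function (data) {
    console.log('echo:', data);
  });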


Even Google is using it ;)


I'll be more than happy to see if my configuration can be tweaked. And I was surprised as well, but it seems that it always boils down to the configuration files; a small change can sometimes yield great results, as seen with specifying the ciphers for SSL.


Like I said, running as a simple reverse proxy, they should be pretty close. The added latency is really the differentiator when going to a single backend, and that's about all you're measuring. Your control itself is still <400 rps.

Now add dozens of backend servers, and start throwing a couple of orders of magnitude more traffic at it, and the differences will really start to show.

Please note: I'm not saying your numbers aren't useful. Many people will be using a setup like this, and would like to know what the differences would be, if any.


Thanks for your feedback. I did do some testing with multiple backends (4 different servers) but it didn't show any substantial changes. I'll see if I can do some more in-depth testing in a while.


It's not a `local` server. The proxy, websocket server and the benchmark utility are all on different servers.


I meant "local" as in "not on the other side of a continent". Here's me pinging a nearby DNS server that's 40 miles away from my desk:

  % ping -c 10 -A 4.2.2.1
  PING 4.2.2.1 (4.2.2.1) 56(84) bytes of data.
  64 bytes from 4.2.2.1: icmp_req=1 ttl=56 time=3.24 ms
  64 bytes from 4.2.2.1: icmp_req=2 ttl=56 time=2.88 ms
  64 bytes from 4.2.2.1: icmp_req=3 ttl=56 time=2.95 ms
  64 bytes from 4.2.2.1: icmp_req=4 ttl=56 time=2.90 ms
  64 bytes from 4.2.2.1: icmp_req=5 ttl=56 time=2.95 ms
  64 bytes from 4.2.2.1: icmp_req=6 ttl=56 time=2.91 ms
  64 bytes from 4.2.2.1: icmp_req=7 ttl=56 time=2.90 ms
  64 bytes from 4.2.2.1: icmp_req=8 ttl=56 time=2.87 ms
  64 bytes from 4.2.2.1: icmp_req=9 ttl=56 time=2.94 ms
  64 bytes from 4.2.2.1: icmp_req=10 ttl=56 time=2.94 ms

  --- 4.2.2.1 ping statistics ---
  10 packets transmitted, 10 received, 0% packet loss, time 1806ms
  rtt min/avg/max/mdev = 2.875/2.952/3.247/0.112 ms, ipg/ewma 200.705/3.019 ms

I can't imagine why a local websocket echo service should be 5x slower than this.


The `http-server-close` option did not change anything, but adding the ciphers allowed it to squeeze out the same performance as Nginx. I'll update the tests accordingly. Thanks for the heads up!


Thanks for testing these. I'm glad HAProxy passes muster with regard to SSL, as we're currently moving to 1.5-dev18 from Apache SSL offloading.


hipache is built on top of http-proxy; that's why I haven't included it in the tests. They seem to have switched to a fork of http-proxy, but there aren't big (if any) performance modifications as far as I've seen from the commits.


I'll re-run the test with the advised changes and see if the performance improves.


Because people obviously don't have a datacenter in their own basement. And the common mistake people make when benchmarking is running the servers on their own machine and then using that same machine to benchmark the server it's running on.

You need multiple (powerful) machines for this. Also, spinning up machines in the cloud is quite easy to do and allows people to reproduce the same test results, because you have access to exactly the same environment.


>You need to have multiple (powerful) machines for this

Oh really? For a simple HTTP 'hello world' comparison benchmark (where you are interested in relative numbers, not absolute ones)?

All you need is one old and slow laptop (with the test contenders) and one modern and mighty one (with the test script). The only thing you have to be sure of is that the test script can generate more load than the test contenders can handle. Even if the old laptop isn't slow enough, you can just add some predictable and stable load to the CPU/disks/network/whatever is a bottleneck for them. You can use tools built for that, or even quick & dirty hacks like a one-liner 'while(1) {do some math}' that, run at high system priority, effectively turns your 2-core CPU into a 1-core one.
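
A hedged sketch of that kind of hack in Node (the file name and priority invocation are assumptions; start one copy per core you want to take out of play):

  // burn.js -- occupies one core with pointless math; run at high
  // system priority, e.g. `sudo nice -n -20 node burn.js`
  while (true) {
    Math.sqrt(Math.random());
  }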


You can reproduce them? So you can match the loads running on all the other VMs on the same host then?


You could use cluster compute instances, which use hardware virtualisation and, I think, give you the whole machine. With spot pricing, you could run them for about 21¢ an hour.


Are you referring to the cipher verification? I used:

  openssl s_client -host localhost -port 8082

That's an openssl command. These settings were used for testing SSL: https://github.com/observing/balancerbattle/blob/master/ngin...

See https://gist.github.com/3rd-Eden/5345018 for the output of openssl s_client with those ciphers. You'll see that `cipher : RC4-SHA` is used here, which is one of the fastest, if not the fastest, ciphers available.
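
For what it's worth, the same check can be done from Node itself; this is a sketch where the host and port mirror the command above, and `rejectUnauthorized: false` assumes the benchmark uses a self-signed certificate:

  // Connect over TLS and print the negotiated cipher; given the cipher
  // list in the linked config, this should report RC4-SHA.
  var tls = require('tls');

  var socket = tls.connect({
    host: 'localhost',
    port: 8082,
    rejectUnauthorized: false
  }, function () {
    console.log('negotiated cipher:', socket.getCipher().name);
    socket.end();
  });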

