Hello Heroku World - Autobenching Heroku (evanweaver.com)
37 points by iamclovin on March 1, 2012 | 12 comments



Interesting post with impressively confusing graphing. Two splines per color, but stroked differently, with nothing to indicate which of the two y-axes a particular stroke of a color belongs to. The legend shows only color, by stroking an empty box, for some reason.


I had to read it 5 times to understand the meaning...

From what I understand: the more dynos you have, the less each dyno can do. Which is funny when you think about it, because your Heroku bill increases linearly with the number of dynos.


"In the below graphs, the response rate is the solid line (and the left y axis); connection errors as a percentage are the dashed line (and the right y axis). The graphs are heavily splined, as suits a meaningless micro-benchmark."


If httperf is using select() with 65k-odd sockets, then that could be a bottleneck...

the "dumb" C impl can be faster too! it should probably fork() a few times so multiple accept()s can fight over the socket, which should probably be put into in non-blocking mode, and should also turn nagle off.

for even more points you can reduce the copying of the trivial response from userspace into the kernel using sendfile()/splice(), if you mlock() it into RAM first!

the printf likely reduces the throughput by a large amount too!

(I've spent far too much fiddling with various syscalls for synthetic benchmarks!)
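
For illustration, here's a minimal sketch of that pre-forked approach (the port, the worker count, and the blocking accept() loop are arbitrary choices of mine; the non-blocking and sendfile()/splice() refinements are left out for brevity):

  /* Pre-forked "dumb" server sketch: several workers contend on
     accept() for the same listening socket, with Nagle disabled
     per connection. Error handling is elided. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      int one = 1;
      int lfd = socket(AF_INET, SOCK_STREAM, 0);
      setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

      struct sockaddr_in addr = {0};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(8080);
      bind(lfd, (struct sockaddr *)&addr, sizeof addr);
      listen(lfd, SOMAXCONN);

      static const char resp[] =
          "HTTP/1.1 200 OK\r\nContent-Length: 12\r\n"
          "Connection: close\r\n\r\nHello World\n";

      for (int i = 0; i < 4; i++) {       /* pre-fork four workers */
          if (fork() == 0) {
              for (;;) {
                  int cfd = accept(lfd, NULL, NULL);
                  if (cfd < 0) continue;
                  setsockopt(cfd, IPPROTO_TCP, TCP_NODELAY,
                             &one, sizeof one);
                  write(cfd, resp, sizeof resp - 1);
                  close(cfd);
              }
          }
      }
      for (;;) wait(NULL);                /* parent just reaps */
  }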


The post says the driver was benched at 25k rps. But yeah, that C implementation sucks.


Looking at the app he used for Bottle, it only replies (dynamically) on /hello/:name, while the others (Sinatra, Node) are configured to reply on "/" as the route. Is that the reason he got such consistently bad results for Bottle? Did he adjust the route used? I'd like to see an actual equivalent "/" = "Hello World" app tested.


Python/Bottle's dismal performance here is in line with Nicholas Piël's findings regarding the WSGIRef server:

"Disqualified servers: (...) WSGIRef, I obtained a reply rate of 352 but it stopped reacting when we passed the 1900 RPS mark"

http://nichol.as/benchmark-of-python-web-servers


Hm... I briefly glanced over the graphs, and the results don't make sense to me. Every single server he tested must make an accept() call in C at some point, plus some (a lot of) extra stuff. If those servers take less time than the plain accept() call, doesn't that just indicate that the benchmark is flawed? Am I missing something?


AFAIK the issue is the way they listen() on the socket, as well as whether the accept() is concurrent in some way.
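
To make that concrete, here's a toy sketch (the port, the tiny backlog, and the sleep() are arbitrary choices of mine): with a small listen() backlog and a serial accept() loop, a burst of benchmark connections overflows the queue, and later connect()s fail, which shows up as connection errors in the graphs.

  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void) {
      int lfd = socket(AF_INET, SOCK_STREAM, 0);

      struct sockaddr_in addr = {0};
      addr.sin_family = AF_INET;
      addr.sin_addr.s_addr = htonl(INADDR_ANY);
      addr.sin_port = htons(8081);
      bind(lfd, (struct sockaddr *)&addr, sizeof addr);

      listen(lfd, 5);    /* tiny backlog: connection bursts overflow it */

      for (;;) {
          int cfd = accept(lfd, NULL, NULL);  /* serial, not concurrent */
          sleep(1);      /* simulate slow per-request work */
          close(cfd);
      }
  }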


Does anyone know of a good alternative to ab or httperf for load testing high-performance HTTP servers? With httperf I can't seem to coax more than ~25,000 requests/sec out of my framework built on Jetty, but ab can easily do > 45,000 req/s. Both seem to be limited by the load generator, not the server.


There is siege, which looks ok (I haven't used it): http://www.joedog.org/siege-home/


Interesting.

Didn't know Jetty was this bad. Or perhaps there are configuration options that influence this?

The author concludes that Tomcat collapses, but in the 4- and 7-dyno scenarios there doesn't seem to be that much difference between Tomcat and Finagle (which the author says did OK). Perhaps I'm reading the graphs wrong?



