> Don't bother reducing keepalive, just disable it altogether. Unless you have a very specific use case it is more trouble than it is worth.
Bad idea. That actively increases the latency of your site: for each asset that has to be fetched you're forcing the client to open a new connection, which can add more than 150 ms of delay per item (thanks to the three-way TCP handshake).
What I would suggest is setting the KeepAlive timeout to a value that can cover each individual page-load. This way all the page elements get a chance to reuse the connections that have already been opened.
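As a minimal sketch of that in httpd.conf (the values are illustrative, not a recommendation; tune them against your own page-load times):

```apache
# Keep connections open just long enough for one page's worth of assets.
KeepAlive On
KeepAliveTimeout 5          # seconds an idle connection waits for the next request
MaxKeepAliveRequests 100    # cap on requests served per connection
```

A timeout of a few seconds usually covers all the assets of a single page without tying up workers for long after the load finishes.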
With threaded Apache (mpm_worker), I have a huge keepalive set.
It's great for an ajax-heavy site, especially when it's all behind SSL. Using a CustomLog I log the total request time, from connection until the request has been served (a conditional log entry when the request is handled by the backend), and I can see it has halved since I switched to threaded Apache.
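A sketch of such a logging setup (the format name, URL pattern, and log path are my own illustrations, not the poster's actual config). Apache's `%D` format specifier records the time taken to serve the request in microseconds, and `env=` makes the log conditional:

```apache
# %D = time taken to serve the request, in microseconds
LogFormat "%h %t \"%r\" %>s %b %D" timing

# Hypothetical marker: tag requests that go to the backend app servers
SetEnvIf Request_URI "^/app/" backend

# Only log requests carrying that marker
CustomLog "logs/backend_timing.log" timing env=backend
```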
Currently I have 100 threads per Apache process, with ~20 processes handling 2000 idle connections. I'm sure this can be tweaked some more.
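Those numbers would correspond to roughly this mpm_worker sizing (a sketch under my own assumptions, not the poster's actual config):

```apache
<IfModule mpm_worker_module>
    ServerLimit          20    # ~20 server processes
    ThreadsPerChild      100   # 100 threads per process
    MaxRequestWorkers    2000  # 20 x 100 concurrent connections
</IfModule>
```

Note that on Apache 2.2 the last directive is spelled `MaxClients` rather than `MaxRequestWorkers`.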
Apache manages SSL etc. and just proxies to my application servers.
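That front-end role looks roughly like this (hostnames, ports, and certificate paths are placeholders; requires mod_ssl, mod_proxy, and mod_proxy_http):

```apache
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/example.crt
    SSLCertificateKeyFile /etc/ssl/example.key

    # Terminate SSL here and hand plain HTTP to the app server
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```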
There's also an event-based Apache module which I haven't tried.
Actually, I've tested this. You get 10ms of extra delay per request, not 150.
There might be some magic value at which KeepAlive will be helpful during non-peak periods without crippling the server during peak periods, but for a well-engineered site, the extra 10ms delay per request shouldn't be a big enough deal to warrant risking a full-on site outage later on.
Light travels less than two thousand miles in 10 ms, and TCP requires three one-way trips (SYN, SYN-ACK, then the final ACK) before the first request can start on a new connection. Anybody more than 620 miles away (about half a time zone) is guaranteed to have a higher setup delay than that.
You're absolutely right, I didn't think about that. However, I did perform the test(s) from Sacramento, CA to a server in Newark, New Jersey -- a distance of 2,810 miles according to Google.
I was fairly careful with the test(s), and the 10ms difference seemed to be consistent. So that's odd. I need to investigate that further.