
For clarity, do you mean that Google can, for example, run at 99% saturation all the time, whereas a typical ISP might average 30-40%, with peaks to full saturation that cause high latency/packet loss when they occur?


Yes, that's about right. Since they control both sides of the link, they can manage the flow from higher up the [software] stack. Basically, if a link is getting saturated, the distributed system simply throttles some requests upstream by diverting traffic away from places that would send traffic over that link. (And of course this requires a very complex control plane, but it's doable, and with proper [secondary] controls it probably stays understandable and manageable, and doesn't go haywire when shit hits the fan.)
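To make the idea concrete, here's a minimal sketch of that kind of upstream diversion, assuming a hypothetical control loop that watches per-link utilization and shifts routing weight away from paths crossing a hot link. All names and thresholds are illustrative, not Google's actual system:

```go
// Hypothetical sketch: when a link's utilization crosses a threshold,
// the control plane shifts routing weight away from paths traversing it.
package main

import (
	"fmt"
	"time"
)

// Link models an inter-DC link with a measured utilization in [0.0, 1.0].
type Link struct {
	Name        string
	Utilization float64
}

// Path is a candidate route with a routing weight the control plane adjusts.
type Path struct {
	Name   string
	Links  []*Link
	Weight float64 // fraction of traffic sent over this path
}

const saturationThreshold = 0.95 // start diverting above 95% utilization

// rebalance shifts weight off paths crossing saturated links,
// then renormalizes so the weights still sum to 1.
func rebalance(paths []*Path) {
	total := 0.0
	for _, p := range paths {
		for _, l := range p.Links {
			if l.Utilization > saturationThreshold {
				p.Weight *= 0.5 // halve traffic over the hot path
				break
			}
		}
		total += p.Weight
	}
	for _, p := range paths {
		p.Weight /= total // renormalize
	}
}

func main() {
	hot := &Link{Name: "dc1-dc2", Utilization: 0.99}
	cool := &Link{Name: "dc1-dc3", Utilization: 0.40}
	paths := []*Path{
		{Name: "direct", Links: []*Link{hot}, Weight: 0.7},
		{Name: "detour", Links: []*Link{cool}, Weight: 0.3},
	}
	for i := 0; i < 3; i++ { // a few iterations of the control loop
		rebalance(paths)
		for _, p := range paths {
			fmt.Printf("%s: %.2f  ", p.Name, p.Weight)
		}
		fmt.Println()
		time.Sleep(10 * time.Millisecond)
	}
}
```

Each pass drains more weight off the saturated path, so traffic converges onto the detour without any router-level intervention.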


So I wonder if that means they can do TCP congestion control without dropping packets.


I guess they do drop packets (it's the best - easiest/cheapest/cleanest - way to backpropagate pressure, aka backpressure), but they watch for drops a lot more vigilantly. Also, as I understand it, they try to separate long-lived connections (between DCs) from internal short-lived traffic. Different teams, different patterns, different control structures.
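A minimal sketch of drop-as-backpressure, assuming a hypothetical bounded queue in front of a saturated link: when the buffer is full, new packets are dropped (like a router's tail drop) and a counter is exposed so upstream senders or monitoring can react. Everything here is illustrative:

```go
// Sketch: a bounded queue that drops rather than blocks when full,
// making loss the backpressure signal that senders observe.
package main

import (
	"fmt"
	"sync/atomic"
)

// boundedQueue mimics tail-drop behavior at a congested hop.
type boundedQueue struct {
	ch    chan int
	drops atomic.Int64
}

func newBoundedQueue(size int) *boundedQueue {
	return &boundedQueue{ch: make(chan int, size)}
}

// enqueue returns false (and counts a drop) if the queue is saturated;
// that drop is what a TCP sender perceives as loss and backs off from.
func (q *boundedQueue) enqueue(pkt int) bool {
	select {
	case q.ch <- pkt:
		return true
	default:
		q.drops.Add(1)
		return false
	}
}

func main() {
	q := newBoundedQueue(4)
	sent := 0
	for pkt := 0; pkt < 10; pkt++ { // burst of 10 packets into a queue of 4
		if q.enqueue(pkt) {
			sent++
		}
	}
	// A vigilant operator watches this counter and diverts traffic
	// upstream before it grows.
	fmt.Printf("queued=%d dropped=%d\n", sent, q.drops.Load())
}
```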



