There was an old design I encountered in my distributed computing class, and noticed out in the world once I’d been primed to look for it: you break ties in a distributed system with a supervisor whose only job is to break ties. In a system that only needs 2 or 4 nodes to satisfy demand, running a 3rd or 5th full node just to break ties adds a lot of operational cost. So you ran a process that understood the protocol but retained none of the data, and whose sole purpose was to break split-brain ties.
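Here’s a minimal sketch of what that kind of tie-breaker might look like, assuming a Raft-style RequestVote exchange. The `Witness` type, the RPC shape, and all field names are mine for illustration, not any particular system’s API; the point is just how little state a vote-only process has to carry.

```go
package main

import (
	"fmt"
	"sync"
)

// VoteRequest is a simplified stand-in for a Raft-style RequestVote RPC:
// the candidate's term and how up-to-date its log claims to be.
type VoteRequest struct {
	Term         uint64
	CandidateID  string
	LastLogIndex uint64
	LastLogTerm  uint64
}

// Witness is the tie-breaker: it tracks just enough election state to vote
// safely (current term, who it voted for, the most advanced log position it
// has been told about), but it stores no log entries and serves no reads.
type Witness struct {
	mu           sync.Mutex
	currentTerm  uint64
	votedFor     string
	seenLogIndex uint64
	seenLogTerm  uint64
}

// RequestVote grants at most one vote per term, and only to candidates whose
// log is at least as up-to-date as anything the witness has seen.
func (w *Witness) RequestVote(req VoteRequest) bool {
	w.mu.Lock()
	defer w.mu.Unlock()

	if req.Term < w.currentTerm {
		return false // stale candidate
	}
	if req.Term > w.currentTerm {
		w.currentTerm = req.Term
		w.votedFor = "" // new term, vote is available again
	}
	if w.votedFor != "" && w.votedFor != req.CandidateID {
		return false // already voted for someone else this term
	}
	// Raft's up-to-date check, using positions gossiped to the witness.
	upToDate := req.LastLogTerm > w.seenLogTerm ||
		(req.LastLogTerm == w.seenLogTerm && req.LastLogIndex >= w.seenLogIndex)
	if !upToDate {
		return false
	}
	w.votedFor = req.CandidateID
	return true
}

func main() {
	w := &Witness{}
	// Two data nodes, A and B, are partitioned from each other and both ask
	// the witness for a vote in term 5. Only one of them can win it.
	fmt.Println("A:", w.RequestVote(VoteRequest{Term: 5, CandidateID: "A", LastLogIndex: 10, LastLogTerm: 4}))
	fmt.Println("B:", w.RequestVote(VoteRequest{Term: 5, CandidateID: "B", LastLogIndex: 10, LastLogTerm: 4}))
}
```

The only durable thing such a process has to remember across restarts is its term and vote, which is why it was so much cheaper to run than a third copy of the data.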
Then we settled into an era where server rooms grew, workloads demanded horizontal scaling, and for the high-profile users running an odd number of processes was a rounding error, so we just stopped doing it.
But we also see this issue re-emerge with dev sandboxes. Running three copies of Kafka, Redis, Consul, Mongo, or god forbid all four, is a lot for one laptop, and it’s 50% more EC2 instances than you really need if you spin up a cluster per dev in the cloud.
I don’t know Kafka well, so I’ll stick with Consul as a mental exercise. If you take something like Consul, the voting logic should be pretty well contained. It’s the logic for catching up a restarted node and serving the data that’s the complex part.
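To make the “well contained” claim concrete, here’s a hedged sketch of how the responsibilities might split. The interface and method names are hypothetical, not Consul’s actual internals (Consul does use Raft underneath, but this is only an illustration of where the bulk of the work sits).

```go
package main

import "fmt"

// A hypothetical split of responsibilities, not Consul's real API. The point
// of the sketch: the election surface a vote-only tie-breaker must implement
// is tiny, while everything a data-bearing server layers on top of it (log
// replication, snapshots, catching a restarted node back up, serving reads)
// is where the complexity lives.

// Voter is all a witness needs: decide whether to grant a vote.
type Voter interface {
	RequestVote(term uint64, candidateID string, lastLogIndex, lastLogTerm uint64) (granted bool)
}

// DataServer is everything a full node also has to do. Method names are
// illustrative only.
type DataServer interface {
	Voter
	AppendEntries(term uint64, prevIndex, prevTerm uint64, entries [][]byte) bool // replicate and persist the log
	InstallSnapshot(term uint64, snapshot []byte) bool                            // catch a lagging or restarted node back up
	Get(key string) (value []byte, found bool)                                    // serve data to clients
}

func main() {
	fmt.Println("a witness only has to implement Voter; data nodes implement DataServer")
}
```

If that split holds, a sandbox could run two real nodes plus one tiny voter instead of three full copies, which is exactly the old tie-breaker trick again.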