> And the network is usually higher latency than a local SSD anyway.
Until your writes outrun the disk and page cache, and your disk I/O latency spikes. Linux used to be really bad at this when memory cgroups were involved, until a couple of years ago.
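To make the failure mode concrete, here's a minimal sketch (my own illustration, not from the original discussion; the filename and sizes are arbitrary). Buffered writes return in microseconds while they land in the page cache; once dirty pages cross the kernel's writeback throttling threshold (vm.dirty_ratio), individual write() calls start stalling:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical test file; adjust the total volume to your free disk space.
	f, err := os.Create("dirty-page-test")
	if err != nil {
		panic(err)
	}
	defer os.Remove("dirty-page-test")
	defer f.Close()

	chunk := make([]byte, 4<<20) // 4 MiB per write
	for i := 0; i < 2048; i++ {  // up to 8 GiB total
		start := time.Now()
		if _, err := f.Write(chunk); err != nil {
			panic(err)
		}
		// Early writes are absorbed by the page cache almost instantly;
		// once writeback throttling kicks in, these stalls show up.
		if d := time.Since(start); d > 10*time.Millisecond {
			fmt.Printf("chunk %d: write() stalled for %v\n", i, d)
		}
	}
}
```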
> If a ramdisk is sufficient for your use case, why would you use a Raft-based distributed networked consensus database like etcd in the first place?
Because at the time, Kubernetes required it. If adapters to other databases had existed then, I would have tested them out.
> Kubernetes uses etcd for storing the cluster state. Do you update the cluster state more than 1000 times per second? Curious what operation needs that.
Steady state in a medium to large cluster exceeds that. At the time I was looking at these etcd issues, I was running fleets of 200+ node clusters and hitting a scaling wall around 200-300 nodes. These days I use a major Kubernetes service that does not use etcd behind the scenes, and my fleets can scale up to 15,000 nodes at the extreme end.
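For a rough sense of scale, this is the kind of quick check I mean, a hedged sketch using the official etcd Go client (go.etcd.io/etcd/client/v3); the endpoint and key prefix are placeholders, and a single serialized writer like this understates what concurrent API servers actually generate:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Placeholder endpoint; point this at a real etcd member.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Hammer a small keyspace for 10 seconds and report the write rate.
	deadline := time.Now().Add(10 * time.Second)
	n := 0
	for time.Now().Before(deadline) {
		ctx, cancel := context.WithTimeout(context.Background(), time.Second)
		_, err := cli.Put(ctx, fmt.Sprintf("/bench/key-%d", n%1000), "v")
		cancel()
		if err != nil {
			panic(err)
		}
		n++
	}
	fmt.Printf("%d puts in 10s (~%d/s)\n", n, n/10)
}
```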