The advantage of K8s isn't really the virtualization technique used, but the orchestration it performs for you. You can certainly configure K8s to run one host per container, if that's what you want.
Here's an example of something that's straightforward in K8s and much less straightforward outside of it:
1. For compliance reasons, you need to make sure that your underlying OS is patched with security updates.
2. This means you need to reboot the OS at some regular interval.
3. You want to do this without downtime.
4. You have a replicated architecture, so you know you have at least two copies of each service you run (and each can handle the required traffic).
In K8s, this operation can be as simple as:
1. Mark your old nodes as unschedulable.
2. Drain your old nodes (evicted pods become pending, so the cluster autoscaler provisions new nodes from your cloud provider with the updated machine image).
3. Delete your old nodes.
The exact steps will differ based on your use case, but that's basically it.
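The three steps can be sketched as a dry-run shell script. The node names are placeholders, and the flags assume a fairly typical setup (e.g. a cluster autoscaler managing the node pool); this prints the kubectl commands rather than running them:

```shell
#!/bin/sh
# Dry-run sketch of a rolling node replacement. Node names are placeholders;
# the script prints the kubectl commands instead of executing them.
set -eu

plan_node_replacement() {
  for node in "$@"; do
    # 1. Cordon: mark the node unschedulable so no new pods land on it.
    echo "kubectl cordon $node"
    # 2. Drain: evict pods (respecting disruption policies); with a cluster
    #    autoscaler, replacement nodes come up with the new machine image.
    echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data"
    # 3. Delete: remove the now-empty node object, freeing the cloud instance.
    echo "kubectl delete node $node"
  done
}

plan_node_replacement node-a node-b node-c
```

Dropping the `echo`s turns this into the real thing, but review the drain flags against your workloads first (daemonsets and emptyDir volumes in particular).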
Steps you didn't need to think about here:
1. If I'm about to take a node down, do I have enough redundancy to handle its loss? K8s understands the health of your software, and with a disruption policy (a PodDisruptionBudget) configured it knows when evicting a container would cause an outage, and will refuse to do it. Note that third-party tech is naturally compatible: if you use Elastic's cloud-on-k8s operator to run Elasticsearch, it will migrate from host to host without downtime too. Likewise, the same script will run on AWS, Azure, and GCP.
2. How fast can I run this? If you build this logic yourself, you'll probably upgrade one node at a time so you don't have to reason about each of the services you run. But at 15 minutes per node, that caps you at fewer than 100 hosts per day. K8s drains whatever it safely can, as soon as it can, without you having to think about it.
3. What happens if concurrent operations need to run (e.g. a scale-up or scale-down)? In K8s, that's a perfectly reasonable thing to do.
4. Does this need monitoring? This is a standard K8s workflow whose components are the same ones exercised by everyday scale-up/scale-down operations, so they're being tested continuously.
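The outage-avoidance policy in point 1 is expressed as a PodDisruptionBudget. A minimal sketch (the name and label are illustrative, not from the original): a drain will refuse to evict a pod if doing so would take the matching service below the budget.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb          # illustrative name
spec:
  minAvailable: 1        # keep at least one replica up during voluntary disruptions
  selector:
    matchLabels:
      app: web           # illustrative label; must match your Deployment's pods
```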
Generally, I've been impressed by how straightforward it's been to remove the edge cases and make complex tech fit well with other complex tech.
A while back we upgraded between two CentOS versions. In that case the recommended approach is to reinstall the OS, since there's no clear in-place upgrade path. In K8s, this would have been the same set of steps as above. In many orgs, it would be a far more manual process.