It doesn’t matter if you have five visitors or 5 million: you presumably want your website to stay online, so you need something that restarts processes when they die, etc. Honestly, building and publishing a Docker image is super simple, and there are numerous hosted Kubernetes solutions to deploy it to. I don’t think that using Next.js or Kubernetes means you are inherently overcomplicating anything, and this is coming from someone who prefers minimal technology (I personally don’t use Next.js, but I see the value in it for others).
You don't need to deploy orchestrators to serve static html/js files.
> In your opinion, because they’re stupid?
If they are doing that just to keep nginx running, that might be the case. Or they are being super clever, doing it for a higher salary at the cost of their employer.
Publishing a Docker image is simple. Managing a Kubernetes cluster, even hosted, may be much more complex, for no added benefit.
Throw your container on a VM and have systemd or even runit keep it running. It scales fine to a half dozen boxes if needed. Same for your Postgres DB. For extra fancy points, keep a hot standby / read replica and pick up one of the manual switch-over scripts.
Should keep you running up to a dozen million DAU, with half a day spent on the initial setup.
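For the curious, here is a minimal sketch of the systemd part, assuming Docker on the VM; the unit name, image (myorg/myapp), and ports are made up, and it's a starting point rather than a hardened setup:

    # write the unit file; image name and ports below are placeholders
    cat >/etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=myapp container
    After=docker.service
    Requires=docker.service

    [Service]
    # restart the container whenever it dies
    Restart=always
    ExecStartPre=-/usr/bin/docker rm -f myapp
    ExecStart=/usr/bin/docker run --rm --name myapp -p 80:3000 myorg/myapp:latest
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now myapp

That covers the "something that restarts the processes when they die" requirement from upthread.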
SRE here. First off, updating the control plane/kubelet is a nightmare in itself, but let's assume you are running managed Kubernetes somewhere so that's taken care of.
Kubernetes out of the box is not ready to go. What Ingress are you going to use? Ingress-Nginx. Cool, cool. How is that getting deployed? Helm chart. How do we keep track of that being kept up to date and who deployed it? ArgoCD. So who is going to teach all the CRDs for Argo and how they work with each other? SREs. You understand we dislike the devs, and the last thing we want to do is hold classes they don't want to attend? JUST BUILD A PLATFORM. And here we go.
So out of the box, most people deploy Kubernetes + 8 "plugins", and it's a Frankenstein monster that you have to manage or it will decide to kill all the workloads one day.
EDIT: I didn't even discuss certificates for that ingress, or all the monitoring/logging this cluster will need to make sure it's operating properly.
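To put rough commands to the parent's list, here is a hedged sketch of just the first few "plugins" (chart flags and versions drift, so treat these as illustrative rather than copy-paste):

    # ingress-nginx via its Helm chart
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm install ingress-nginx ingress-nginx/ingress-nginx \
      --namespace ingress-nginx --create-namespace

    # cert-manager for the certificates mentioned in the EDIT
    helm repo add jetstack https://charts.jetstack.io
    helm install cert-manager jetstack/cert-manager \
      --namespace cert-manager --create-namespace --set installCRDs=true

    # ArgoCD to track who deployed what
    kubectl create namespace argocd
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

And that still leaves monitoring, logging, and the classes on Argo's CRDs.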
It has a terrible first-time user experience. Once you accept that developer experience is user experience for 19 out of 20 programmers, it's clearly imperfect. For example, even though programming is all about reading, the average programmer looks more like the average person than ever before, and the average person hates reading. So: Kubernetes bad. IMO this is why so much success has been found building SaaS on top of Kubernetes.
Because you have to have and manage a Kubernetes cluster, which is not easy. Moreover, scaling is not always automatic: you can't just tell Kubernetes "scale this application" and have it work if the application is a monolith. You have to write it using a microservices architecture, which is much more difficult.
Also, managing a CI pipeline isn't easy to begin with; there are entire teams at companies that do only that.
In contrast, running a simple server is much simpler: you install a Linux OS (probably Ubuntu Server or Debian), install a web stack (for example Apache, PHP, and MySQL or Postgres these days), and copy your website's files into the document root (/var/www), or, if you are fancy, pull a git repository on the server so you can update it with a simple "git pull". If you need more websites on the same machine, configure virtual hosts (or use one of the many tools that offer a GUI for configuring them).
The second solution is much simpler, and in fact it's the most common one as far as I can see. Most websites don't need high availability: if the website of the bike repair shop 100 meters down my street goes offline for a couple of minutes, or even a day, because I'm doing maintenance on the server, really nobody notices.
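For reference, that whole "simple server" path fits in a handful of commands; a sketch for Debian/Ubuntu with Apache, PHP, and Postgres, using a placeholder repo URL:

    # minimal web stack on Debian/Ubuntu; swap postgresql for mysql-server if you prefer
    apt update
    apt install -y apache2 php libapache2-mod-php postgresql git

    # deploy the site into the default document root
    git clone https://example.com/me/mysite.git /var/www/html/mysite

    # later updates really are a one-liner
    cd /var/www/html/mysite && git pull

Virtual hosts are one small config file per site on top of that.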
It’s not necessarily that it’s hard, but to be effective with Kubernetes you need to understand a lot of infrastructure concepts: DNS, load balancing strategies, Docker, storage drivers, service discovery, what exactly a pod is, what exactly a container is, etc.
It’s a lot of up front knowledge needed for marginal benefit at small to mid scales.
There’s a lot of steps in the “write your deployment yaml” step.
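For a sense of scale, here is a minimal sketch of that yaml applied inline; the name, image, and port are placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myorg/myapp:latest
            ports:
            - containerPort: 3000
    EOF

And even that only gets you pods: you still need a Service, probably an Ingress, and certificates before anyone outside the cluster can reach it.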
If you are at small or mid scale and you need lots of compute between 8 and 5, Monday through Friday, it's basically the best thing you can use to schedule your workload.
If you need things like load balancing, zero-downtime deploys, etc. without it, you will probably end up building your own k8s, which is often worse.
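A hedged illustration of the scheduling point: the built-in autoscaler reacts to load rather than to the clock, but one low-effort way to get business-hours scaling is a pair of cron entries on a box with kubectl access (the deployment name and replica counts here are invented):

    # scale up weekday mornings, back down in the evening
    0 8 * * 1-5  kubectl scale deployment myapp --replicas=20
    0 17 * * 1-5 kubectl scale deployment myapp --replicas=2
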
Or you can deploy on App Engine, Lambda, Firebase, etc.
Kubernetes is not the only game in town.
Sometimes shipping asap is more important.
As with just about anything in tech, it’s about tradeoffs.
I may be biased, since I worked in devops before k8s came out, but building a decently scalable system architecture with load balancing and rolling deployments is pretty straightforward with monolithic systems. Especially since service discovery isn’t really a concern. Horizontal scaling works well in many many cases.
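To make "straightforward" concrete, the classic version of that setup is nginx round-robining over a few app instances; a sketch, with placeholder backend addresses and ports:

    # write a simple load-balancing config; IPs/ports are placeholders
    cat >/etc/nginx/conf.d/app.conf <<'EOF'
    upstream app_backend {
        server 10.0.0.11:3000;
        server 10.0.0.12:3000;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_backend;
        }
    }
    EOF
    nginx -t && nginx -s reload

A rolling deploy is then: mark one backend down, update it, bring it back, repeat.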
Realistically any app simple enough to be deployed by hand with a few docker containers will not be difficult to convert to k8s anyway.