I've looked at it a few times but haven't been able to get my head around the concept. Reading the intro documentation I haven't been able to map the concept of "nodes" to a server, database, and GPU server.
* You describe what type of state you need (Deployment = one or more Pods running your container, Service = expose the deployment)
* Kubernetes runs a control loop that reconciles the cluster's current state with your desired state
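To make that concrete, here's a minimal sketch of the two objects described above (names like `my-api` and the image are hypothetical placeholders):

```yaml
# Deployment: desired state = 2 replicas of this container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: api
          image: my-api:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
# Service: expose those Pods behind one stable cluster-internal address.
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api      # routes to any Pod with this label
  ports:
    - port: 80
      targetPort: 8080
```

You `kubectl apply -f` both, and the control loop keeps two Pods running and the Service pointing at them, recreating Pods if they die.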
Nodes are the machines (often VMs) on which everything runs. You don't need to interact with them unless you have special requirements (e.g. GPU instances). In that case, you annotate the GPU nodes with some label (like `has-gpu=true`), then in your deployment you add a node affinity saying it must run on nodes with that label. Kubernetes will schedule it for you as long as a matching node exists.
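A rough sketch of that label + node affinity setup, using the `has-gpu=true` label from above (node and workload names are hypothetical):

```yaml
# First label the GPU node (replace <node-name> with yours):
#   kubectl label nodes <node-name> has-gpu=true
#
# Then in the Deployment's Pod template, require that label:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gpu-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gpu-worker
  template:
    metadata:
      labels:
        app: gpu-worker
    spec:
      affinity:
        nodeAffinity:
          # "required...": hard constraint, Pod stays Pending
          # if no node matches.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: has-gpu
                    operator: In
                    values: ["true"]
      containers:
        - name: worker
          image: my-gpu-image:latest   # hypothetical image
```

(For the simple single-label case, `nodeSelector: {has-gpu: "true"}` in the Pod spec does the same thing with less ceremony.)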
The nodes are just the individual machines in your K8s cluster.
Which one(s) run your API/server, your database, etc. is basically arbitrary. You're free to let the K8s scheduler do as it sees fit, or you can get increasing degrees of control by adjusting things like node and pod affinity/anti-affinity, etc.
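As one example of that kind of control: a pod anti-affinity rule that asks the scheduler to spread replicas across different nodes (the `app: my-api` label is a hypothetical placeholder):

```yaml
# Goes inside a Pod template's spec. "preferred..." makes this a
# soft rule: the scheduler tries to avoid co-locating two Pods
# with this label on the same node, but will if it has to.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: my-api
          topologyKey: kubernetes.io/hostname   # "same node" = same hostname
```

Swap `preferred...` for `required...` and it becomes a hard rule: no two such Pods ever share a node.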