
I migrated all of my services to k8s in the last ~6 months. The biggest hurdle was the development environment (testing and deployment pipelines). I ended up with a home-brewed strategy that happens to work really well.

# Local development / testing

I use "minikube" for developing and testing each service locally. I use a micro service architecture in which each service can be tested in isolation. If all tests pass, I create a Helm Chart with a new version for the service and push it to a private Helm Repo. This allows for fast dev/test cycles.

These are the tasks that I run from a "build" script:

* install: Install the service in your minikube cluster.

* delete: Delete the service from your minikube cluster.

* build: Build all artifacts: docker images, helm charts, etc.

* test: Restart pods in the minikube cluster and run tests.

* deploy: Push Helm Chart / Docker images to private registry.

This fits in a 200 LOC Python script. The script relies on a library, though, which contains most of the code that does the heavy lifting. I use that lib for all microservices which I deploy to k8s.
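
As a rough sketch, the task dispatch could look like this (assuming Helm 3 syntax; the service name, image, and chart path are placeholders, not the actual library):

    #!/usr/bin/env python3
    # Sketch of a per-service build script; the real logic lives in a shared lib.
    import subprocess
    import sys

    SERVICE = "my-service"                           # placeholder
    CHART_DIR = "charts/my-service"                  # placeholder
    IMAGE = "registry.example.com/my-service:1.2.3"  # placeholder

    def run(*cmd):
        subprocess.run(cmd, check=True)

    def install():
        run("helm", "install", SERVICE, CHART_DIR)

    def delete():
        run("helm", "uninstall", SERVICE)

    def build():
        run("docker", "build", "-t", IMAGE, ".")
        run("helm", "package", CHART_DIR)

    def test():
        # restart pods so they pick up the freshly built image, then run tests
        run("kubectl", "rollout", "restart", f"deployment/{SERVICE}")
        run("pytest", "tests/")

    def deploy():
        run("docker", "push", IMAGE)
        # chart upload depends on your repo (ChartMuseum, OCI registry, ...)

    if __name__ == "__main__":
        {"install": install, "delete": delete, "build": build,
         "test": test, "deploy": deploy}[sys.argv[1]]()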

# Testing in a dev cluster

If local testing succeeds, I proceed to test the service in a dev cluster. The dev cluster is a (temporary) clone of the production cluster, running services with a domain prefix (e.g. dev123.foo.com). You can clone data from the production cluster via volume snapshots if needed. If you have multiple people in your org, each person can spawn their own dev cluster, e.g. dev-peter.foo.com, dev-sarah.foo.com.

I install the new version of the microservice in the dev cluster via `helm install` and start testing.
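
For example (release name, repo, and the chart value for the host are placeholders; I'm assuming the chart exposes the ingress host as a value):

    import subprocess

    subprocess.run([
        "helm", "install", "my-service", "myrepo/my-service",
        "--version", "1.2.3",
        "--kube-context", "dev-peter",              # target the dev cluster
        "--set", "ingress.host=dev-peter.foo.com",  # hypothetical chart value
    ], check=True)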

These are the steps that need automation for cloning the prod cluster:

* Register nodes and spawn clean k8s cluster.

* Create prefixed subdomains and link them to the k8s master.

* Create new storage volumes or clone those from the production cluster or somewhere else.

* Update the domains and the volume IDs and run all cluster configs.

I haven't automated all of these steps yet, since I don't need to spawn new dev clusters very often. It takes about 20 minutes to clone an entire cluster, including 10 minutes of waiting for the nodes to come up. I'm going to automate most of this soon.
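
On the DigitalOcean side, the node and volume steps are scriptable with doctl, roughly like this (all IDs, names, regions, and sizes are placeholders; double-check the flags against `doctl compute ... --help`):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # Spawn a fresh node for the dev cluster
    run("doctl", "compute", "droplet", "create", "dev-peter-node-1",
        "--region", "fra1", "--size", "s-2vcpu-4gb",
        "--image", "ubuntu-20-04-x64", "--ssh-keys", "<fingerprint>")

    # Clone a production volume via a snapshot, then create a dev volume from it
    run("doctl", "compute", "volume", "snapshot", "<prod-volume-id>",
        "--snapshot-name", "dev-peter-data")
    run("doctl", "compute", "volume", "create", "dev-peter-data",
        "--size", "100GiB", "--snapshot", "<snapshot-id>")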

# Deploy in prod cluster

If the above tests pass, I run `helm upgrade` for the service in the production cluster.
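
i.e. roughly the following (names are placeholders again; `--atomic` is a Helm 3 flag that rolls the release back automatically if the upgrade fails):

    import subprocess

    subprocess.run([
        "helm", "upgrade", "my-service", "myrepo/my-service",
        "--version", "1.2.3",
        "--kube-context", "prod",
        "--atomic",  # roll back automatically on a failed upgrade
    ], check=True)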

This works really well.



thanks for the details, and sorry for the perhaps distracting question: how do you handle DNS for the servers on your foo.com? Did you have to provide nameservers to your registrar so k8s manages the DNS? This is something I don't see addressed very often in k8s tutorials, which usually assume minikube or GKE.


> thanks for the details, and sorry for the perhaps distracting question: how do you handle DNS for the servers on your foo.com?

No worries :) I use DigitalOcean for the nodes, storage volumes, etc. and Namecheap to register the domains.

At Namecheap I simply use the "Custom DNS" option for each domain to apply the DigitalOcean nameservers. I only have to do this once per domain.

Once that is done, I can use the DigitalOcean API, the "doctl" tool, or the web interface to manage domain records, subdomains, etc.
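
Creating a prefixed subdomain then boils down to a single A record (domain, name, and IP are placeholders):

    import subprocess

    # Point dev-peter.foo.com at the dev cluster's master node
    subprocess.run([
        "doctl", "compute", "domain", "records", "create", "foo.com",
        "--record-type", "A",
        "--record-name", "dev-peter",
        "--record-data", "203.0.113.10",  # master node's public IP
    ], check=True)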

In order to connect a domain/subdomain to a k8s cluster, I point the domain/subdomain to the master node of the cluster. In each k8s cluster I use an "ingress" (nginx) [0] to handle the incoming traffic, directing each request to the right service/pod. You can think of it as a load balancer that runs within your cluster.
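
A minimal example of such an Ingress resource (in practice the Helm chart would template this; host and service names are placeholders, and I'm using the current `networking.k8s.io/v1` API):

    import subprocess
    import textwrap

    # Route one host to one service via the nginx ingress controller
    MANIFEST = textwrap.dedent("""\
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: my-service
        spec:
          ingressClassName: nginx
          rules:
          - host: dev-peter.foo.com
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service
                    port:
                      number: 80
        """)

    subprocess.run(["kubectl", "apply", "-f", "-"],
                   input=MANIFEST, text=True, check=True)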

This strategy would work on other cloud providers (Azure, Google Cloud, etc.) as well, I guess.

0: https://github.com/kubernetes/ingress-nginx


That was really helpful, thanks!



