
This assumes that you're running in an environment where your servers are cattle and not pets, and in all fairness, not everyone is running large-scale web platforms on some orchestration platform. I don't disagree that, even in a pets world, one should know how to restore/rebuild a system, because without that you don't have a sound BDR strategy.



Arguably, about 80% of those running their app on a cattle farm should really have gone with a pet cafe instead. Resumes would certainly be a lot less impressive, but they'd also have a lot fewer fires to put out and a significantly smaller infra bill.

But regarding the topic at hand, I don't think being able to manage these things with a graphical interface is necessarily a bad thing. It's basically user-space iDRAC/IPMI.


I maintain 3 servers. It's not worth automating the deployment.

I'll spend less time just setting them up by hand.

The company will survive a few hours of downtime.


Are there any tools that allow you to manage a server like a pet, yet ensure it can be restored/rebuilt?

And, to stretch the pet analogy, ones that let your neighbors look after your pets while you're on holiday?


There's no reason you can't use puppet/chef/ansible/whatever on pets!

The reason that (some) people don't do this is the cost/benefit analysis looks kind of weird. You'll spend a lot of time mucking around in puppet/chef/ansible/whatever for a single snowflake server, and it would be a lot faster to just go edit that config file directly.

In reality, proper backups and shell history can get you pretty far if you ever find you need to replicate a snowflake.
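For a sense of what that overhead looks like, here's a sketch of a one-off play just to change a single config line on a pet (the hostname, file path, and service name are made up):

    # one-off play to bump a single setting on one pet host
    # (hostname, path, and service name are hypothetical)
    - hosts: mailserver01
      become: true
      tasks:
        - name: Bump max connections in the daemon config
          ansible.builtin.lineinfile:
            path: /etc/exampled/exampled.conf
            regexp: '^max_connections'
            line: 'max_connections = 200'
          notify: restart exampled
      handlers:
        - name: restart exampled
          ansible.builtin.service:
            name: exampled
            state: restarted

Compared to ssh'ing in and editing the file by hand, that's a lot of ceremony for one line, which is exactly the cost/benefit problem.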


I have a homelab that is mostly pets (one or two servers that do a job, e.g. one DNS server, one VPN server), and I absolutely spend my time mucking about with Ansible to set them up. But it's awesome when I need to upgrade a server to a new OS version and I can just delete the entire VM and re-configure from scratch relatively fearlessly. Before my silly HaC (Homelab as Code) kick, it wasn't a huge deal to rebuild a server over an afternoon, referencing docs and old notes, etc., but I prefer it this way.
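The playbook for a single-purpose pet doesn't have to be big, either. A sketch for a standalone DNS VM might look something like this (package choice, hostname, and file names are illustrative, not any particular setup):

    # rebuild-dns.yml -- re-create the DNS pet from a blank VM
    # (hostname, package, and config file are placeholders)
    - hosts: dns01
      become: true
      tasks:
        - name: Install unbound
          ansible.builtin.apt:
            name: unbound
            state: present
            update_cache: true
        - name: Deploy unbound config from the repo
          ansible.builtin.copy:
            src: files/unbound.conf
            dest: /etc/unbound/unbound.conf
          notify: restart unbound
      handlers:
        - name: restart unbound
          ansible.builtin.service:
            name: unbound
            state: restarted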

Also Ansible is incredibly useful at my work and there's a very large overlap. Which is obviously the main motivation.


In my homelab, I use Portainer to manage my hosts. All of my workloads are installed as collections of Docker containers, and I'm slowly but surely migrating even single-container installs to Compose stacks. With some really bare-bones GitOps, those stack files live in Git and get deployed to the host through Portainer, which at least gives me the recipes to rebuild my environment should it ever be lost.
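Each stack in that setup is just a small Compose file in the repo; a minimal sketch like this (service name, image, and port are placeholders) is enough for Portainer to deploy from Git:

    # stacks/whoami/docker-compose.yml -- a minimal stack file kept in Git
    # (service, image, and port are placeholders)
    services:
      whoami:
        image: traefik/whoami:latest
        restart: unless-stopped
        ports:
          - "8080:80"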


I've also stumbled into the same paradigm - everything as compose files checked into git, deployed onto portainer. IMO pretty nice and low maintenance.



