
You are missing one important part: maintenance. On a managed service, someone else puts in dozens of hours of maintenance; when you self-host, you'll be doing three times that, because you can't know all the details of making so many tools work together, because each tool will eventually need an upgrade and the upgrade will fail, because you have to test your backups, and many more things in the long run.

So yeah, it's fun. But don't underestimate that time: it could easily be time spent with friends or family.


I have been self hosting for years. The maintenance is minimal to nonexistent. You are conflating modern SaaS with a stable OSS docker image.

Keeping services running is fairly trivial. Getting to parity with the operationalization you get from a cloud platform takes more ongoing work.

I have a homelab that supports a number of services for my family. I have offsite backups (rsync.net for most data, a server sitting at our cottage for our media library), alerting, and some redundancy for hardware failures.
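The offsite piece of that can be as simple as a nightly cron-driven rsync push (a sketch; the paths and the rsync.net account name are placeholders):

```
# Push /srv/data to rsync.net at 03:00 nightly; -a preserves metadata,
# -z compresses, --delete mirrors deletions so the remote matches the source.
0 3 * * * rsync -az --delete /srv/data/ user1234@user1234.rsync.net:backups/data/
```

rsync.net speaks plain SSH/rsync, so no agent needs to run on their end; tools like restic or borg work over the same channel if you want encrypted, deduplicated snapshots instead of a mirror.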

Right now, I have a few things I need to fix:

- one of the nodes didn't boot back up after a power outage last fall; need to hook up a KVM to troubleshoot
- cottage internet has been down since a power outage, so those backups are behind (I'm assuming it's something stupid, like I forgot to change the BIOS to power on automatically on the new router I just put in)
- various services occasionally throw alerts at me

I have a much more complex setup than necessary (k8s in a homelab is overkill), but even the simplest system still needs backups if you care at all about your data. To be fair, cloud services aren't immune to this, either (the failure mode is more likely to be something like your account getting compromised, rather than a hardware failure).


You're spending that much time on it because you're doing too much. Your use of the term "homelab" is telling. I have:

* A rented VPS that's been running for ~15 years without any major issues, only a couple hours a month of maintenance.

* A small NUC-like device connected to the TV for media. Requires near-zero maintenance.

* A self-built 5-drive NAS based around a Raspberry Pi CM4 with a carrier board built for NAS/networking uses. Requires near-zero maintenance.

* A Raspberry Pi running some home automation stuff. This one requires a little more effort because the hardware it talks to is flaky, as is some of the software, so maybe 2-3 hours a month.

The basics (internet access itself) are just a commodity cable modem, a commodity router running a manufacturer-maintained OpenWRT derivative, a pair of consumer-grade APs reflashed with OpenWRT, and a few consumer-grade switches. There's no reason for me to roll my own here, and I don't want to be on the hook for it when it breaks. And if any of the stuff in the bulleted list breaks, it can sit for days or weeks if I don't feel like touching it, because it's not essential.

And yes, I've had hardware failures and botched software upgrades. They take time to resolve. But it's not a big burden, and I don't spend much time on this stuff.

> I have a much more complex setup than necessary

Yup.

> Getting to parity with the operationalization you get from a cloud platform takes more ongoing work.

You don't need this. Trying to get even remotely there will eat up your time, and that time is better spent doing something else. Unless you enjoy doing that, which is fine, but say that, and don't try to claim that self-hosting necessarily takes up a lot of time.


It's definitely mostly a hobby, but I also want to get something close to the dependability of a cloud offering.

I started small, with just a Raspberry Pi running Home Assistant, then Proxmox on an old laptop... growing to what I have now. Each iteration has added complexity, but it's also added capability and resiliency.


A hidden cost of self-hosting.

I love self-hosting and run tons of services that I use daily. The thought of random hardware failures scares me, though. Troubleshooting hardware failure is hard and time consuming. Having spare minipcs is expensive. My NAS server failing would have the biggest impact, however.


Other than the firewall (itself a minipc), I only have one server where a failure would cause issues: it's connected to the HDDs I use for high-capacity storage, and has a GPU that Jellyfin uses for transcoding. That would only cause Jellyfin to stop working—the other services that have lower storage needs would continue working, since their storage is replicated across multiple nodes using Longhorn.

Kubernetes adds a lot of complexity initially, but it does make it easier to add fault tolerance for hardware failures, especially in conjunction with a replicating filesystem provider like Longhorn. I only knew that I had a failed node because some services didn't come back up until I drained and cordoned the node from the cluster (looks like there are various projects to automate this—I should look into those).
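For reference, manually evicting workloads from a dead node looks something like this (a sketch; `node-3` is a placeholder node name):

```
# Mark the failed node unschedulable so no new pods land on it
kubectl cordon node-3

# Evict existing pods so they reschedule elsewhere; DaemonSet pods can't be
# evicted, and data in local emptyDir volumes is discarded
kubectl drain node-3 --ignore-daemonsets --delete-emptydir-data

# Once the hardware is fixed and the node rejoins, allow scheduling again
kubectl uncordon node-3
```

(`kubectl drain` cordons the node implicitly, so the first command is technically redundant but makes the intent explicit.)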


This point is oversold.

Sure, self-hosting takes a bit more work. It usually pays for itself in saved costs (if you weren't doing this work yourself, you'd be paying money, which you had to do other work to earn, to have it done for you).

Cloud costs haven't actually gotten much cheaper (but the base hardware HAS - even now during these inflated costs), and now every bit of software tries to bill you monthly.

Further, if you're not exposing services to the open web, you don't actually need to update all that often. Especially not the services themselves.

Honestly - part of the benefit of self-hosting is that I can choose whether I really want to make that update to latest, and whether the features matter to me. Often... they don't.

---

Consider: most people are running outdated ISP-provided routers with known vulnerabilities that haven't been updated in literally years. They do OK.


Much easier with AI. I went from an all-in web hosting package + NAS to Hetzner Storage Share and a separate email provider (Runbox). After a short time I dumped the Nextcloud instance and moved on to a Hetzner VPS with five Docker containers, Caddy, proper authentication and all. Plus a Storage Box. Blogging/homepage as Cloudflare Pages, fed by GitHub; domains from CF and Porkbun; Tailscale, etc., etc. ad nauseam. NAS still there.

Most of this I didn't do for many years because it is not my core competence (in particular the security aspects). Properly fleshed-out explanations from any decent AI will catapult you to this point in no time. Maintenance? Almost zero.

p.s. Admittedly, it's not a true self-hosting solution, but the approach is similar and ultimately leads to that as well.


Same for me. I’m not an engineer (but worked with them for 2 decades) and AI has been amazing for me self hosting.

For example I could never setup Traefik correctly because I just found it too complicated. Now I have Claude I finally got it setup just the way I want it - the ROI on my Claude subscription has been off the scale!

The obvious downside is that I might not really know what exactly I’m implementing and why. I do read all the explanations that Claude gives but it’s hard to retain this information. So there are pros and cons to relying on AI for this kind of stuff I suppose


Since using NixOS for my home server, I've found it to Just Work™ flawlessly every time.

If anyone reading this has struggled with servers accumulating cruft and requiring maintenance, I recommend NixOS.


Agreed. NixOS + Tailscale is 99% there for me. Using Claude Code to deal with whatever other package I need built with nix while I'm working on $day_job things helps get me to a fully working system. Besides the fact that running containers via podman or docker (your choice) is super easy via a NixOS config.

Combine that with deploy-rs or similar and you have a very very stable way to deploy software with solid rollback support and easy to debug config issues (it's just files in the ./result symlink!)
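As a rough illustration, a NixOS module along these lines declares Tailscale, podman, and a container in a handful of declarative options (a sketch; the image name and container name are made up):

```nix
{ config, pkgs, ... }:
{
  # Tailscale and podman are one-liners
  services.tailscale.enable = true;
  virtualisation.podman.enable = true;

  # Declarative containers: NixOS generates the systemd units for you
  virtualisation.oci-containers = {
    backend = "podman";
    containers.myapp = {
      image = "ghcr.io/example/myapp:1.0";  # hypothetical image
      ports = [ "127.0.0.1:8080:8080" ];
    };
  };
}
```

Because the whole system is built from this file, `nixos-rebuild switch --rollback` (or a deploy-rs rollback) returns you to the previous generation if an upgrade misbehaves.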


There are a lot of people who have made a lot of money and built careers because developers in particular don't want to know, or don't care to know, how to manage this stuff.

They need to get over it.

Pick up some Ansible and/or Terraform/tofu and automate away. It can be as easy or as involved as you want it to be.
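The starting point really can be tiny. A minimal Ansible playbook for the kind of routine maintenance people dread might look like this (a sketch; the host group and the timer unit name are placeholders):

```yaml
# playbook.yml -- run with: ansible-playbook -i inventory playbook.yml
- hosts: homelab
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Ensure the backup timer is enabled and running
      ansible.builtin.systemd:
        name: backup.timer   # hypothetical systemd unit
        state: started
        enabled: true
```

Run it from cron or a CI job and the "dozens of hours of maintenance" largely becomes reading the occasional failure email.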


Yes, I agree with that sentiment. There are times when I'm spending way too much time restarting a service that went down, but it doesn't take as long as it used to, especially with AI assistance nowadays. If I'm spending too much time on it, then I'm probably also learning something along the way, so I don't mind spending that time.

Self hosting doesn't take as much effort as the cloud providers want us to believe.

I've been renting a VPS for 15-20 years from a small provider. It runs a webserver, gitea instance, matrix homeserver, and a bunch of other things, and I spend maybe an hour or two per month maintaining it. Add a few non-recurring hours if I want to set up something new or need to change something big.

Self hosting is not hard. It's not scary. It's not a security nightmare. All of that is just FUD.


That is going to be used... less than Swift for the servers

Interestingly, Kotlin has a pretty solid cross-platform story.

I'd pick it over Swift if targeting Android since it can build and run on the JVM as well as natively -- and has Swift/ObjC interop. It's also very usable on the server if you want, since you can use it in place of Java and tap into the very mature JVM ecosystem. If that's what you're into.

And I have a lot more faith in JetBrains being good stewards of the language than in Apple, who have a weird collection of priorities.


Kotlin is practically a no-brainer when you have the JVM at your fingertips, versus something like Swift, which is comparatively young.

I tried to use Vapor with Swift recently and struggled to get something working because the documentation looked comprehensive, but had a lot of gaps. I ended up throwing it out because I didn't have the time to dig through the source to understand how to do something, when I could use a mature framework in any other language instead.

The promise is there but I'm just not ready to invest. My youthful days of unbounded curiosity are coming to an end and these days I just want to get something done without much faff.


Mind you, Kotlin/Native (which is what gets used when you're compiling for iOS) doesn't have access to the JVM.

However, the Kotlin community is fundamentally all about open source, whereas Apple and iOS devs have an allergy to it. The quality and quantity are already miles above the vast majority of what's in the Swift ecosystem. https://klibs.io has all the native-compatible libs. And if you're targeting a platform where the JVM is available, then yeah, it's massive. Compose makes UI tolerable compared to AWT, too. Even large projects like Spring are Kotlin-first nowadays.


JetBrains has a monetary interest in promoting Kotlin beyond Android; Apple has zero incentive to promote Swift as the language outside of iOS and Mac. They don't need to capture the minds of devs to get them to develop for Apple devices.

I don't know. Could be nice for developers who prioritize iOS: now they could keep writing Swift for Android as well.

Is it gonna be what you primarily use if you wanna write an Android app? Probably not.

Is it gonna displace React Native? Probably not. Is it gonna reach the levels of Flutter? Maybe.


The language doesn’t really matter. The underlying SDK/framework is where the action is at.

However, I suspect that we may not be too far off from LLMs being the true cross-platform system. You feed the same requirements, with different targets, and it generates full native apps.


> from LLMs being the true cross-platform system

Fully agree. I have zero Swift knowledge and currently use an LLM to write a native app. I'm well aware of the SDKs and concepts in iOS development, so even if something's wrong I have an intuition for where to look and how to make the LLM fix it.


I’m not sure about that but porting libraries from one language to another seems well within their capabilities.

I don't think we're there yet, but I suspect that it's just a matter of time.

It would certainly be quite profitable. Money tends to drive progress.


> Is it gonna reach the levels of flutter? Maybe.

Never. It won’t even reach Compose's level; Flutter-level DX is unattainable for any framework outside Flutter.


This is going to be used much more than Swift for servers. Swift is a primarily client-side mobile language. It makes sense that you tap into reusing the logic.

Just like .NET for linux... right? RIGHT?

At least a very common .NET web app framework is often deployed on linux. Powershell for linux might be more apt lol

By excellent, you mean excellent at not being able to talk to someone about your real-world problem, and needing to rely on your LinkedIn contacts to find someone to talk to?

In any serious business, you don't want people to use their personal Apple IDs: that could lock their company-provided devices forever when they leave. You also don't want to buy them apps that you won't be able to reuse when they leave, ...

> that could lock their company provided devices for ever when they leave

MDMs like Jamf offer override codes to disable Activation Lock. Hasn’t been an issue in my experience.


This is the worst: people who use them hate the receiver

The poster has built something that, while technically interesting, is profoundly annoying as a user and deserves backlash to prevent more of this kind of stuff from being built

LLMs can be exhausting because you are processing more information per hour than you used to: first, because LLMs know much more than you do and provide it to you at a quick pace; second, because you have to think about harder things like decisions, architecture, and edge cases, where before you managed, on average, more small details and less big picture.

Working with an LLM is kind of like working with one (or more) junior developer(s): try that first, then you'll really see how LLMs can be (less) exhausting.


Here you are describing any job, nothing related to LLMs.


Why contact Algolia when it is the users' responsibility to handle their keys? Contact all the users.


The comment you're responding to is output of an LLM.


Note all the very similar grey comments at the bottom of the page.


I'll take that as a compliment; my writing finally passed the Turing test


If this happens so often, perhaps Algolia should improve their stuff to prevent this? For example, by implementing a dedicated search endpoint that doesn't accept normal API keys, but only dedicated read-only keys.
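Algolia does in fact support something like this via "secured API keys": restrictions are embedded in the key and signed with HMAC so the server can reject out-of-scope queries. Roughly, the signing scheme works like the stdlib-only sketch below (not the official client; the parent key and restriction values are made up):

```python
import base64
import hashlib
import hmac
from urllib.parse import urlencode


def generate_secured_key(parent_key: str, restrictions: dict) -> str:
    """Sign URL-encoded restrictions with HMAC-SHA256 keyed by the parent
    key, prepend the hex signature, and base64-encode the result."""
    query = urlencode(sorted(restrictions.items()))
    sig = hmac.new(parent_key.encode(), query.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode((sig + query).encode()).decode()


def decode_restrictions(secured_key: str) -> str:
    """The restrictions are not secret and can be read back by anyone;
    only the 64-char hex signature binds them to the parent key."""
    return base64.b64decode(secured_key).decode()[64:]


# Hypothetical parent key and restrictions
key = generate_secured_key(
    "parent-search-key",
    {"restrictIndices": "products", "validUntil": 1767225600},
)
print(decode_restrictions(key))
# → restrictIndices=products&validUntil=1767225600
```

Leaking a key generated this way only exposes the queries it was scoped to (one index, until an expiry date), which is exactly the kind of guardrail the parent comment is asking for.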


Because if it's easy to use one's product dangerously, that reflects poorly on the product. Algolia should help its clients avoid silly mistakes.


It is the users' responsibility to operate footguns responsibly.


There is show and there is reality: the French TGV reached 574.8 km/h in 2007 for show, but under specific conditions, not real-world conditions.

While it is technically proven that 400+ km/h on rail is possible, it's not practical: maintenance, wear, noise, curves, junctions, and overall cost... many considerations that are probably less important for Chinese railways now, which need some "show".


You should update your data; in 2013, China's high-speed rail reached 605 km/h on experimental lines. The CR450 is scheduled to enter commercial service in 2026.


Sorry if I wasn't clear, but I was not talking about demo runs. There are plenty of those. I meant that operational speeds have a limit.

