
just use a regular server for mid/larger sized apps.

i started my web dev journey with JAMStack, Vercel, the "edge". everything is easy as long as one only deploys a full-stack NextJS app. the moment other apps come in, just use a server deployed on a VPS and avoid "edge runtime hell". edge runtime hell refers to "you can't do this (function with over 2MB payload), you can't do that (because it's not supported by X)".

EDIT: by "deploy as VPS" I meant deploy your server via Render/Heroku/CI-CD instead of as serverless functions running on the "edge".



I'm coming around to building everything for a VPS from the outset. There's a lot of upside to VPSes, such as:

1. Can be purchased as a fixed cost, usually at a rate that's much cheaper than on-demand pricing, and especially serverless--this tends to only get better with time as competition keeps prices low

2. It's "just" a Unix/Windows/Mac box, so the issues with runtime constraints you mention are bounded differently (and often more favorably); serverless is also just a box, but the constraints tend to be more onerous and limiting and it's not usually accessible in the same way

3. With containers, it's trivial to move between providers, so the hardware itself becomes fungible

4. On containers, I'm having a great time shipping Docker Compose configs (see the sketch after this list)--this works really well for the scale of application I'm targeting while avoiding the dreaded complexity of e.g. k8s

5. There's decades of high quality tooling already built and battle tested which makes operating VPSes much easier; the fact you can SSH into the machine, for instance, has huge leverage as a solo person working on independent products
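
To make point 4 concrete, here's a minimal sketch of what "shipping a Docker Compose config" to a VPS can look like; the host name and paths are placeholders, and your exact workflow will differ:

    # copy the Compose file to the box, pull new images, restart the services
    rsync docker-compose.yml deploy@my-vps:/srv/app/
    ssh deploy@my-vps 'cd /srv/app && docker compose pull && docker compose up -d'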

Going forward, I'm planning to skip edge compute altogether unless there's a really compelling reason to want it. I should also mention that when a VPS is paired with a CDN, you can layer on bits of "edge compute" where it's warranted; or, you know, use it to cache static assets close to your users. :)

All-in-all it's kind of a funny return to where I started ~20 years or so ago with web development.


> it's kind of a funny return to where I started ~20 years or so ago with web development

It's a journey I've been going on too.

All the new platforms and paradigms that have emerged have had an initial shallow aura of helpfulness that drew me in, but almost without exception, when digging in I've realised that they create more issues than they solve, and/or introduce limitations that aren't worth it and force me to write janky, overly complicated workaround code that isn't comprehensible even a few weeks hence.

Maybe the most notable exception to this is containers. But in a sense they're an abstraction over the very same VPS paradigm. So that makes sense. It's not something new or different, just the same thing but with some advantages (and disadvantages too, obviously).


If VPSes had the same marketing and maybe slightly more polished standard tooling, I have a feeling they would quickly gain traction and simplify life for most people. As well as prevent vendor lock-in, of course.

It's in the SaaS business model. The incentives, even for a fully open source product like Supabase, are misaligned with self-hosting. Even if they're super honest and trying to be helpful, their fully managed, globally available offering is going to have very different needs than self-hosters do.

I actually prefer the model where the company behind a FOSS product offers consulting instead: building, deploying and operating the product for customers who lack the in-house expertise. It's not perfect, but it helps align the incentives towards simplicity.


I've been using Coolify on a Hetzner VPS, it works great, it's like an open source Heroku that works through Docker and Compose.


I had a look at it and it seemed interesting, but then I spotted the `-v /var/run/docker.sock:/var/run/docker.sock`.


In case anyone's wondering, that gives the container root level access to the host's Docker daemon. A big potential security hole.
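
To illustrate why (a generic demonstration of why socket access is equivalent to root, not a claim about what Coolify does with it): any process that can reach the host's Docker socket can ask the daemon to start an arbitrarily privileged container.

    # run by anything that can talk to /var/run/docker.sock; drops you into a
    # root shell on the host's filesystem
    docker run -it -v /:/host alpine chroot /host sh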


It's also just generally wrong to build a scheduler on top of the docker API. We have CRI for a reason, because everyone knows Docker is not going to be around forever. Certainly not the company. Maybe dockerd.


> usually at a rate that's much cheaper than on-demand pricing

This is an area where there's legitimate swindling/inflation by hosted app providers (e.g. DO Apps, Heroku, Render, Fly, etc). Oftentimes the per-vCPU/memory price is inflated over the underlying cost, even relative to a rather expensive underlying provider like AWS; which they'll reasonably justify by saying that this is the value-add: yeah, you pay more, but it's more managed. But: when you have underlying access to the VPS, you can host more than one process! Which, of course, they're oftentimes doing on their end to cut costs.

Serverless functions can legitimately fall into the "always cheaper" category. If you've got twenty apps that each get request volume in the range of dozens-to-thousands per month, you could host that on a $5/mo VPS, or you could pay a few cents for a FaaS (Lambda, GCP, Cloudflare Workers, etc, all priced in the same magnitude). But the price-to-scale curve of serverless functions is really weird; they do hit a point where they're more expensive than just, you know, running a web server on a VPS. That point isn't at a super low volume, but it's not far off from where it's something to think about for a typical software organization. If I had a personal project that hit that point, I'd classify it as a good problem to have.
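
As a rough back-of-the-envelope illustration of where that crossover sits, assume something like AWS Lambda's published pay-per-use pricing (about $0.20 per million requests plus roughly $0.0000167 per GB-second) and a small 128 MB function that runs for 100 ms per request; real numbers will vary:

    per-request cost     ≈ 0.125 GB * 0.1 s * $0.0000167 + $0.0000002 ≈ $0.0000004
    requests $5/mo buys  ≈ $5 / $0.0000004 ≈ 12 million/month

So for that toy workload the crossover with a $5 VPS is on the order of ten million requests a month: well above hobby volume, but within reach of a real product.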

I also feel endless frustration that there legitimately isn't a single cloud provider out there that (1) offers a serverless functions runtime, and (2) gives you the ability to set a non-zero budget which turns off the world when you go over it. Many offer free tiers with no credit card, and some are even generous (Vercel and Firebase are two good examples), but I won't build on a free tier. I want to pay you. So, you upgrade and give a credit card, and now you're liable for one fuck-up bankrupting you, or throwing you on your knees at the mercy of their customer support. The severity of this fuck-up ranges from "my GCP account just runs a VPS, but egress isn't capped, so the bill was a bit high this month" to "the new dev changed a lambda function to call another which called the first one, and our bill is now the GDP of a developing nation-state".

The vast majority of the managed infrastructure world is, unfortunately, B2B swindlers just trying to out-swindle each other, only possible because they're all buying from each other, constantly raising prices, finding new ways to bill their customers, and losing any grip on the true (extremely low) reality of their costs. Supabase is better than most. I really do appreciate releases like this one. I'd also add Cloudflare to my list of "good ones"; they've taken a hard stance against charging for bandwidth, and I think that single decision has held down a ton of the incremental costs we see from their newer, higher-level product offerings like Workers.


I have stuck with cheap VPS servers for as long as I can remember. It takes 5 minutes to deploy a full stack node.js app, along with a database - and I've yet to exhaust the resources on my VPS, even with all my side projects (production grade and hobby stuff).

Have always found it weird how so many Heroku-style hosting providers charge _per app_; things get costly, quickly, when you have lots of small apps like I do.

Just yesterday I realised I'll need a database to store job queues for https://chatbling.net - ChatGPT helped me figure out how to install it, have it persist to disk, ensure it starts up with the system etc. It's nice to not be fearful of unexpected charges hitting my card.
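
For the curious, here's roughly what that boils down to on Debian/Ubuntu, as a sketch only; it assumes something Redis-like for the queue store, and your setup may differ:

    sudo apt install redis-server
    # persist to disk: turn on the append-only file
    sudo sed -i 's/^appendonly no/appendonly yes/' /etc/redis/redis.conf
    # start now and on every boot
    sudo systemctl enable --now redis-server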

To anyone reading: even if it's just for learning, every now and then skip vercel/fly.io/netlify/planetscale/upstash/render/railway or whatever cool hosting provider is in fashion, and give a $5 VPS a try.


I think I want to do the same. Can you describe your stack please? How much downtime do you get and how do you deal with app updates and system updates to the vps machine itself? What about monitoring?


These PaaS cost an arm and a leg, and each one has its own DSL you have to learn.

Easiest is to just provision your own VPS and run docker-compose or k8s.


Who needs a DSL when you can just have a massively nested undocumented yaml file?


I’ve been using Cloudflare workers for a while now and I have to disagree.

There are entire classes of problems I no longer worry about with Workers and can just focus on building. My search history is a reflection of that. I’m no longer looking up “how do I put this thing in a jail or container to limit exposure?”, “how do I properly secure an SSH server?”, or “what is the magic incantation in my nginx configuration to get an A+ SSL rating?”

I also spend less time thinking about ongoing maintenance, automating rotation of SSL certs, keeping system packages up to date, doing a dist-upgrade every few years, maintaining Terraform files to rebootstrap a server from scratch, thinking about hot/cold redundancy, etc.

With (some) serverless providers an absolutely massive slice of the responsibility of building a web application is pushed across the API and vendor boundary and is someone else’s responsibility.

For me at least, this is huge. I have a handful of clients that don’t have any full time engineering staff, and being able to push the cost of ongoing maintenance down is the only thing that allows them to afford building a custom application.


I'm afraid you're trying to make system administration look much harder than it actually is. As an example, adding good defaults to your nginx config is automated by certbot, or you could use caddy. You could run your apps statelessly by containerizing your applications or by simply writing Ansible playbooks and then not have to worry about upgrades - you simply deploy the application on the new server and spin down the old one.
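
For example, the nginx "magic incantation" from the parent mostly reduces to the following (the domain is a placeholder, the package names are the Debian/Ubuntu ones, and certbot also installs a timer that handles renewals):

    sudo apt install certbot python3-certbot-nginx
    # obtains a certificate and rewrites the server block with certbot's
    # recommended TLS settings
    sudo certbot --nginx -d example.com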


> simply writing Ansible playbooks

As someone who has been doing this for some years now, it is not simple.

To be fair, you can write a simple Ansible playbook if Ansible isn't doing much for you. But if you're using Ansible to manage things which are themselves not simple (like "just install a Node runtime please") you are at the mercy of whatever shell script Ansible eventually ends up calling.
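
For instance, a typical "install Node" role ultimately shells out to something like the vendor's setup script, and you're trusting whatever that script and its repositories decide to do (illustrative only; your role may take a different path):

    curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
    sudo apt-get install -y nodejs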

I've been through Ubuntu version updates and Ansible really didn't help with issues like "that package is now no longer in the PPA you got it from".

Administration doesn't take a lot of my time, but when I do need to do it, it can take a solid day of focus to make sure I know what I'm doing, make the changes, recover from the pitfalls I always forget (e.g. "this Ansible step fails in --check mode because it depends on a file that is created in an earlier step, which doesn't happen in check mode") then work through the inevitable issues. I wouldn't want to do it without Ansible, but it's not "simply" anything.


If you stick to shipping containers it gets rid of 99% of these problems at the cost of some extra storage for the N copies of runtimes. Then your base infrastructure is reduced to “something that runs containers” which can be anything from vanilla docker, to docker-compose, to one of the many diy-PaaS platforms, to a full blown k8s cluster.


I'm going to challenge you on this.

I've maintained large systems and small systems. FreeBSD and Linux systems. I helped build and maintain the serverless platform that hosts the Netflix API. I managed build systems and CI/CD for docker images deployed to k8s and built a package manager to address the instability that is inherent with rebuilding artifacts the way tools like Docker do (which are already a marked improvement over Ansible).

Ansible is essentially logging in and running a series of shell scripts. This works great in isolation, but do it long enough and you'll realize a lot of things you thought were idempotent, atomic, and infallible are not. Most package managers are glorified tarballs with shell scripts wired up to lifecycle hooks during install. You YOLO unpack them into a global namespace and hope for the best. With any luck, when something surprising happens, you can just rerun your script to bring the server back into a good state. But often times the server just ends up borked and you have to throw it away and start over.
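
You can see this for yourself: a .deb, for example, is just an ar archive whose "lifecycle hooks" are plain maintainer shell scripts (the package name is a placeholder):

    ar t some-package.deb                # debian-binary, control.tar.*, data.tar.*
    dpkg-deb -e some-package.deb ./ctrl  # extracts postinst/prerm and friends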

K8S somewhat addresses this by maintaining the desired state in a declarative format and comparing the actual state against the declared state in an eval loop. But K8S is absolutely massive and unbelievably complicated. Most declarative systems are non-trivial. The closest I've seen our industry get to this ideal is Nix.

Linux itself is a beast, a reliable beast, but it's a chunk of software I don't think you can just wave your hand at and say "this is easy!" It's easy because it works. When it doesn't work, it's absolutely not trivial.

And this is the core of it: everything you just listed off that makes server administration easy has no delegation of responsibility. They are abstractions that you ultimately own. When they stop working, that's your problem. The Ansible project has no vested interest in the health of your server or the success of your CI/CD pipeline. They have no engineers standing by to help you bring your site back up. That's all 100% you even if you've pushed it down under the covers.

Compare that to my serverless deployments. I pay a vendor to be responsible for everything I possibly can, and everything I end up being responsible for I keep as minimal as possible. These deployments aren't mine, they are my customers'. My customers are small to medium sized businesses (for my Fortune 500 contracts, I build the systems you're talking about and a whole lot more). A small to medium sized business cannot maintain Ansible. They are mechanics, plumbers, drywallers, etc. They are not Linux system administrators. And I'm not here to milk them for money; I want to get in, get done, and leave them with a stable system that requires minimal maintenance. I do that by having vendors lined up that are responsible for the system running below my software, and those vendors' support contracts are a lot cheaper than my weekly rate.


I updated my post. By "deploy as VPS" I meant "deploy as a (virtual private) server with a service like Render or Heroku".

I agree on the sysadmin stuff, no thanks. the appeal of "serverless edge", at least to me, is ease of deployment. but that ease of deployment comes from tooling / git integration rather than from the underlying architecture.

the benefit of "your functions are globally distributed" has yet to appeal to me. for large ecommerce, maybe. but then again, just deploy your servers in a distributed fashion, which is "easy" if no persistent state is involved and tooling like fly.io is emerging.


to be honest I think much of this is also solved by thinking of VPS as "docker compose on a VPS" (or whatever container tech you want to use to abstract away the bare-metal sysadmin stuff).

I honestly think with containers there is much less need for stuff like render and heroku, even more so if you use SQLite and remove the hosted database complexity.

In fact in many ways it's even better - for instance the ability to run locally with an exact bit-for-bit production environment. Can't really do that with a PaaS.
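
A minimal sketch of what that can look like; the image name, port and volume are placeholders, and pinning the image tag (or digest) is what keeps laptop and VPS identical:

    # docker-compose.yml
    services:
      app:
        image: ghcr.io/example/app:1.2.3
        ports:
          - "80:3000"
        volumes:
          - app-data:/data   # the SQLite file lives here and survives restarts
    volumes:
      app-data:

    # then, identically on a laptop or on the VPS:
    docker compose up -d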



