And there is truth to that.
Most deployments are at that level, and it absolutely is way more performant than the alternative. It just comes with several tradeoffs... but those tradeoffs are usually worth it for deployments with <10k concurrent users, which Figma certainly isn't.
You probably could still do it, but that's likely more trouble than it's worth.
(The 10k is just an arbitrary number I made up; there's no magic threshold that makes this approach unviable. It all depends on how users interact with the platform, and how often and where the data gets inserted.)
I've been working with Rails since 1.2 and I've never seen anyone actually do this. Every meaningful deployment I've seen uses Postgres or MySQL. (Or, god forbid, MongoDB.) It takes very little time with your SQL statements.
You can run Rails on a single host using a database on the same server. I've done it and it works just fine as long as you tune things correctly.
I don't remember the exact details because it was a long time ago, but what I do remember is:
- Limiting memory usage and the number of connections for MySQL
- Tracking the maximum memory size of the Rails application servers so you didn't run out of memory by running too many of them (see the sketch after this list)
- Avoiding writing unnecessarily memory-intensive code (this is pretty easy in Ruby if you know what you're doing)
- Avoiding using gems unless they were worth the memory use
- Configuring the frontend webserver to start dropping connections before it ran out of memory (I'm pretty sure the cutoff was just a guess)
- Using the frontend webserver to handle traffic whenever possible (mostly redirects)
- Using iptables to block traffic before it hit the webserver
- Periodically checking memory use and turning off unnecessary services and cronjobs
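For the application server piece, here's a minimal sketch of what that kind of tuning can look like, assuming a Unicorn-style preforking server; the worker count, memory figures, and paths are illustrative, not the actual values from that deployment:

```ruby
# config/unicorn.rb -- illustrative numbers, not the original deployment's config
worker_processes 3            # each worker is a full single-threaded Rails process;
                              # 3 workers x ~80 MB stays well under a 512 MB VPS
timeout 30                    # kill any worker stuck on a request for more than 30s
listen "/tmp/unicorn.sock", backlog: 64   # small backlog: refuse excess traffic early
preload_app true              # load Rails once in the master, then fork workers
                              # (copy-on-write shares most of that memory)

before_fork do |_server, _worker|
  # the master's DB connection isn't usable after fork, so drop it here
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |_server, _worker|
  # each worker opens its own connection, so MySQL sees exactly one per worker
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```

Pinning the worker count (and MySQL's connection limit to match) caps both memory use and concurrency in one place.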
I had the entire application running on a 512 MB VPS with roughly 70 MB to spare. That was a little less headroom than I wanted, but it worked.
Most of this was just rate limiting with extra steps. At the time Rails couldn't use threads, so there was a hard limit on the number of concurrent requests.
When the site went down, it was due to rate limiting rather than the server locking up, so it was still possible to SSH in and adjust the firewall instead of forcing a restart.
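To make the "rate limiting with extra steps" point concrete, the back-of-the-envelope capacity math looks roughly like this (numbers made up for illustration, not measurements from that site):

```ruby
# Illustrative figures only -- not measurements from the actual deployment.
workers          = 3      # single-threaded Rails processes
avg_request_time = 0.2    # seconds per request

max_in_flight  = workers                      # => 3 requests served at once
max_throughput = workers / avg_request_time   # => 15 requests per second

puts "concurrent: #{max_in_flight}, throughput: #{max_throughput.round} req/s"
# Anything beyond that sits in the listen backlog or gets dropped at the
# firewall/webserver, so overload shows up as refused requests rather than
# the whole box locking up.
```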
I love k8s, but bringing back up a single app that crashed is a very different problem from "our k8s is down" - because if you think your k8s won't go down, you're in for a surprise.
You can also view a single k8s cluster as a single host that will go down at some point (e.g. a botched upgrade, a cloud network partition, or something similar). Such outages are much less frequent, but also much more difficult to get out of.
Of course, if you have a multi-cloud setup with automatic (and periodically tested!) app migration across clouds, well then... perhaps that's the answer nowadays... :)
> if you think your k8s won't go down, you're in for a surprise
Kubernetes is a remarkably reliable piece of software. I've administered a (large X) number of clusters, often with several years of cluster lifetime each, with everything upgraded through the relatively frequent Kubernetes release lifecycle. We definitely needed maintenance windows sometimes, but no, Kubernetes didn't unexpectedly crash on us. Maybe I just got lucky, who knows. The closest we ever got was the underlying etcd cluster hitting heartbeat timeouts due to insufficient hardware, and etcd healed itself once the nodes were reprovisioned.
There's definitely a whole lotta stuff in the Kubernetes ecosystem that isn't nearly as reliable, but that has to be differentiated from Kubernetes itself (and the internal etcd dependency).
> You can view a single k8s also as a single host, which will go down at some point (e.g. a botched upgrade, cloud network partition, or something similar)
Managed Kubernetes services solve the whole "botched upgrade" concern, and etcd is designed to tolerate cloud network partitions and recover from them.
Comparing this to sudden hardware loss on a single-VM app is, quite frankly, insane.
If you start using more esoteric features, the reliability of k8s goes down. Guess what happens when you enable the in-place vertical pod scaling feature gate?