What I really wish for is a simple way of running docker containers (like AWS Fargate, or at least ECS), because I want to run docker containers across multiple droplets, but I don't want the full complexity of Kubernetes. Also something akin to auto-scaling groups. If DO had those, I'd use it a whole lot more than I do (currently I only spend approx. $140/month on DO).
I just led a migration for my small team from Zeit Now to Render (https://render.com/). It has filled this need pretty well. There are some features that I wish existed but overall the simplicity has been great for our use case. They do not have auto-scaling but it's planned (https://feedback.render.com/features/p/autoscaling).
(Render founder) Glad to hear it, and I hope you've posted your feature requests on https://feedback.render.com. We're investing heavily in growing the team and putting a strong engineering foundation in place so we can keep adding new features quickly and reliably.
Render is more flexible than Heroku: you can host apps that rely on disks (like Elasticsearch and MySQL), private services, cron jobs, and of course free static sites. You also get automatic zero downtime deploys, health checks and small but handy features like HTTP/2 and automatic HTTP->HTTPS redirects.
It's considerably less expensive as your application scales: a webapp that needs 3GB RAM costs $50/month on Render; on Heroku you'll pay $250/month for 2.5GB RAM, and $500/month for the next tier (14GB RAM).
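To make the comparison concrete, here's a quick back-of-the-envelope cost-per-GB calculation using the prices quoted above (the plan labels are just for illustration, not official tier names):

```python
# Back-of-the-envelope cost-per-GB-of-RAM comparison, using the prices above.
plans = {
    "Render 3GB":   (50, 3),     # ($/month, GB of RAM)
    "Heroku 2.5GB": (250, 2.5),
    "Heroku 14GB":  (500, 14),
}

for name, (price, ram_gb) in plans.items():
    print(f"{name}: ${price / ram_gb:.2f} per GB per month")
# Render works out to roughly $16.67/GB, versus $100/GB and ~$35.71/GB on Heroku.
```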
Repobus (https://www.repobus.com/), which has yet to launch, will provide this functionality on DO or any other cloud. Simply add your nodes and that's it. It's a Heroku-like platform on your own VPS. Launching sometime in April.
What about hosted Kafka or hosted message queues? Are there any plans (even tentative) in that regard? That's the other missing piece that would make a huge difference to me.
One final question: will Spaces support proper static site hosting? There was a ticket about it stating it was planned for Q3 or Q4 last year, but there was no follow-up.
Yeah, I quite like the look of it. I'm still scared of actually running it myself, though, since the master node requirements call for several servers. I guess I need to look into it again.
Their documented "minimum requirements" are quite ridiculous TBH. I mean, officially Vault requires 6 Consul servers (dedicated to Vault, mind you) to be considered ready for production. I doubt most companies using Vault with Consul follows this.
I think you could be fine with 3 of the smallest machine types.
That's basically what put me off trying Nomad. Not just the minimum number, but also their stated hardware requirements. From their documentation:
Nomad servers may need to be run on large machine instances. We suggest having between 4-8+ cores, 16-32 GB+ of memory, 40-80 GB+ of fast disk and significant network bandwidth
Basically, the cost of the Nomad masters would be much greater than the cost of what would run my actual applications. That's a non-starter for me.
> I think you could be fine with 3 of the smallest machine types.
Maybe, but if that's their official stance, it would make me very nervous to run a production system on lower-spec machines.
They (and many others) had better pray central banks keep printing money by the ton, because once the crunch comes, these IT SUVs will be the first to go.
Maybe there's some element of "geek macho" on their side here, but this is their recommendation for supporting a "small" workload, where I suspect you find yourself on the very low end of that.
Like, you're not doing "big data" unless we're talking petabytes per day.
It really depends on what your workload is, though.
We have been running between 100 and 200 jobs in Nomad, with the number of clients doubling and then shrinking every day, using 3 × t3.micro instances for the servers, for years.
We have yet to see our Nomad usage increase enough to get rid of these machines.
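For reference, a minimal Nomad server configuration for a small three-server cluster like this might look roughly as follows (a sketch; the `datacenter` name and `data_dir` path are placeholder values):

```hcl
# Minimal Nomad server config (sketch) -- one of three small server nodes.
# "dc1" and the data_dir path are placeholders, not values from the thread.
datacenter = "dc1"
data_dir   = "/opt/nomad/data"

server {
  enabled = true
  # Wait for 3 servers to join before electing a leader,
  # matching a 3-node quorum on small instances.
  bootstrap_expect = 3
}
```

Three servers gives you a Raft quorum that tolerates one node failure, which is why 3 keeps coming up as the practical minimum.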