
That's only around $10/hour, which doesn't actually buy one a whole lot of servers/databases/bandwidth/monitoring/logs/etc.


The reliability/uptime guarantees of the cloud providers are dubious, but in this case I don't think they even need to be discussed: this product makes no profit. No money is going to be lost if the thing goes down, because it already doesn't make profit. In fact, just keeping the thing up is making them lose money, so short of completely shutting it down, moving it to more cost-effective hosting would at least mean they can keep it going for longer on their donations.


>> which doesn't actually buy one a whole lot of servers/databases/bandwidth/monitoring/logs/etc.

It buys you 150 machines for a month: each a 12-core, 24GB VPS with unlimited traffic on a 1Gbps link.
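
Back-of-the-envelope behind that claim (the per-machine price is my assumption, in the ballpark of what budget European providers charge for that spec):

    # ~$50/month per 12-core / 24GB VPS is an assumed price, not a quote
    monthly_budget = 90_000 / 12        # ~ $7,500/month (the "$10/hour" above)
    per_vps = 50
    print(monthly_budget / per_vps)     # -> 150.0 machines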

See my comment below.


And then you get to stand up your own databases, load balancers, monitoring, logging, etc. To do that correctly you need a development team with significant operations experience, and they will surely cost you more than $90k/year.

I get it, AWS looks expensive, but a bunch of their foundational services are real force-multipliers if you don't have the cash to build out entire operational teams.


>> if you don't have the cash to build out entire operational teams

It's just not true that AWS doesn't need expensive experts to get stuff done - it really does.

Anyone who is half decent on the command line in Linux can get all those servers installed and running without the spaghetti complexity of AWS.

The cloud as a magical place of simplicity, ease of use, and infinite scalability in every direction - I think it's the opposite of that. AWS is a nightmarish tangle of complexity: systems that are hard to configure, understand, relate, and maintain.

It's MUCH easier just to load up a single powerful machine with everything you need. I'm not saying that works for all workloads, but a single machine or a few machines can take you an awfully long way.


> It's MUCH easier just to load up a single powerful machine with everything you need. I'm not saying that works for all workloads, but a single machine or a few machines can take you an awfully long way.

For the core service I tend to favour monoliths too, but I would say you are vastly underestimating the halo of other crap needed to operationalise a real website/SaaS.

Where is your load balancer? Your database redundancy? Where are backups stored? Where are you streaming your logs for long-term retention? Where are you handling metrics/alarming?

Bare metal is great, but you have to build a ton of shit to actually ship product.


> Where is your load balancer? Your database redundancy? Where are backups stored? Where are you streaming your logs for long-term retention? Where are you handling metrics/alarming?

What we lose sight of is that those things aren't as important as we, as SREs, would like to think. When you're a corporation of one person trying to stay afloat, you can just rely on a single big box and spend your time dealing with all the other problems first. Make sure you have an escape hatch so you can scale up if need be, but don't overengineer for a problem you won't run into.
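
The one exception I'd still set up on day one, even on a single box, is an off-box backup. A minimal sketch, assuming Postgres and any S3-compatible object store (names, endpoint and credentials are placeholders):

    import datetime
    import subprocess
    import boto3  # any S3-compatible store works; endpoint, bucket and auth are placeholders

    # Nightly single-box backup: dump the database, ship the dump off the machine.
    # Run from cron, e.g.: 0 3 * * * /usr/bin/python3 /opt/backup/backup.py
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
    dump_path = f"/var/backups/app-{stamp}.sql.gz"

    # pg_dump straight into gzip; assumes .pgpass or PG* env vars for auth.
    subprocess.run(f"pg_dump appdb | gzip > {dump_path}", shell=True, check=True)

    boto3.client("s3", endpoint_url="https://objects.example.com").upload_file(
        dump_path, "app-backups", f"postgres/app-{stamp}.sql.gz")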


> Your database redundancy? Where are backups stored? Where are you streaming your logs for long-term retention? Where are you handling metrics/alarming?

Who cares? At this point it's a hobby project that makes zero profit and is bleeding money. No more money is going to be lost if they lose the DB tomorrow. No more money is going to be lost if they go down for an hour or a day or a week (in fact, they might _save_ money if they don't get more AWS charges during the outage).

They have nothing to lose, and about $6k/month to gain by moving to cost-effective hosting, which could actually make this a decent side project.


I've managed all that and more in the past with bare metal (you forgot configuring routers, installing OSes, managing upgrades, dealing with hardware swaps, etc). It's soooo much more sane to deal with that than with AWS configuration - actually relatively easy for someone half competent. Luckily we can just employ people to mess around full time with AWS.


> And then you get to stand up your own databases, load balancers, monitoring, logging, etc.

You get most if not all of this on DigitalOcean, Linode, Upcloud, Scaleway, etc., all of them a LOT cheaper.

> but a bunch of their foundational services are real force-multipliers if you don't have the cash to build out entire operational teams.

No, they're not. As above, and for a lot of things AWS's complexity and silly factor can make it even worse. In GCP I can set up a dual-region bucket. As simple as that. In AWS I need to set up 2 buckets, a replication role, bucket policies, lifecycle policies and a lot more just to get the same thing. Force multiplier? As in making it slower? EKS takes longer than the default Terraform timeout to provision. The list goes on...
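
To make it concrete, here's roughly what that dance looks like through the Python SDKs. This is a sketch: bucket names, regions and role names are made up, and the freshly created role may need a moment to propagate before the last call succeeds.

    import json
    import boto3
    from google.cloud import storage

    # GCP side: one call, dual-region picked via a predefined location like "NAM4".
    storage.Client().create_bucket("example-dual-region-bucket", location="NAM4")

    # AWS side: two buckets, versioning on both, an IAM role + policy, then the
    # replication rule. All names and regions below are placeholders.
    src, dst = "example-bucket-use1", "example-bucket-usw2"
    s3_east = boto3.client("s3", region_name="us-east-1")
    s3_west = boto3.client("s3", region_name="us-west-2")
    iam = boto3.client("iam")

    s3_east.create_bucket(Bucket=src)  # us-east-1 takes no LocationConstraint
    s3_west.create_bucket(Bucket=dst,
                          CreateBucketConfiguration={"LocationConstraint": "us-west-2"})

    # Versioning is mandatory on both ends before S3 will accept a replication rule.
    for c, b in ((s3_east, src), (s3_west, dst)):
        c.put_bucket_versioning(Bucket=b,
                                VersioningConfiguration={"Status": "Enabled"})

    # A role S3 can assume, allowed to read the source and write replicas to the dest.
    role = iam.create_role(
        RoleName="s3-crr-role",
        AssumeRolePolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow",
                           "Principal": {"Service": "s3.amazonaws.com"},
                           "Action": "sts:AssumeRole"}]}))
    iam.put_role_policy(
        RoleName="s3-crr-role", PolicyName="s3-crr-policy",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {"Effect": "Allow",
                 "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
                 "Resource": f"arn:aws:s3:::{src}"},
                {"Effect": "Allow",
                 "Action": ["s3:GetObjectVersionForReplication",
                            "s3:GetObjectVersionAcl", "s3:GetObjectVersionTagging"],
                 "Resource": f"arn:aws:s3:::{src}/*"},
                {"Effect": "Allow",
                 "Action": ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags"],
                 "Resource": f"arn:aws:s3:::{dst}/*"}]}))

    # Finally, the replication rule itself (the new role can take a few seconds to
    # propagate before this call stops failing).
    s3_east.put_bucket_replication(
        Bucket=src,
        ReplicationConfiguration={
            "Role": role["Role"]["Arn"],
            "Rules": [{"Status": "Enabled", "Priority": 1, "Filter": {},
                       "DeleteMarkerReplication": {"Status": "Disabled"},
                       "Destination": {"Bucket": f"arn:aws:s3:::{dst}"}}]})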


I think a lot of folks make their lives unnecessarily complicated by trying to do things on AWS in an explicitly non-AWS way (I inherited a startup codebase last year that did this to itself in spades).

Why go to the trouble of running Kubernetes on top of AWS, when ECS does roughly the same job at a fraction of the complexity?
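
For a simple service the whole deployment surface is something like this (a sketch assuming Fargate; the cluster, execution role ARN, subnets and security group are placeholders):

    import boto3

    # Minimal ECS/Fargate service: register a task definition, then create a service.
    ecs = boto3.client("ecs", region_name="us-east-1")

    task_def = ecs.register_task_definition(
        family="web",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        containerDefinitions=[{
            "name": "web",
            "image": "nginx:1.27",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }],
    )

    ecs.create_service(
        cluster="default",
        serviceName="web",
        taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-cccc3333"],
            "assignPublicIp": "ENABLED",
        }},
    )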

Why use Terraform when CloudFormation maps better to the underlying primitives?


> Why go to the trouble of running Kubernetes on top of AWS, when ECS does roughly the same job at a fraction of the complexity?

Because it doesn't. ECS has its own complexities - EC2 fleets, autoscaling groups and more. Suddenly you need launch templates, and it goes on. Have you tried updating an ECS service via the CLI? It's largely confusing.
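
Even on Fargate, where you dodge launch templates entirely, a plain image bump is two coupled calls: register a whole new task definition revision, then point the service at it (a sketch, placeholder names as above):

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # You don't edit the running task definition; you register a new revision...
    new_rev = ecs.register_task_definition(
        family="web",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        containerDefinitions=[{"name": "web", "image": "nginx:1.28",
                               "portMappings": [{"containerPort": 80}]}],
    )

    # ...and then tell the service to roll over to it.
    ecs.update_service(
        cluster="default",
        service="web",
        taskDefinition=new_rev["taskDefinition"]["taskDefinitionArn"],
        forceNewDeployment=True,
    )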

> Why use Terraform when CloudFormation maps better to the underlying primitives?

If only. It has gotten better, but historically CloudFormation has had a lot of missing features, and it still does. CloudFormation can get stuck for hours and you just have to wait. The cross-region support is terrible. Not to say Terraform doesn't have its quirks, but it's definitely not "worse".


Vendor lock-in. Knowledge transfer.

And I disagree that AWS is less complex. Managing services across AWS is complex; K8s is as well, but I would rather manage K8s on bare instances.


Less vendor lock-in.

My devex team has developed Helm charts we can use that automatically detect EKS, AKS or GCP Kubernetes and configure the parts of an app to work with each environment, but the end users of the chart don't really have to care.
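
The detection trick is basically "look at well-known node labels". A rough sketch of the idea in Python rather than Helm (the label prefixes are the ones I believe each provider sets, worth double-checking; a chart would express the same check with lookups/values):

    from kubernetes import client, config

    # Guess which managed Kubernetes we're on from provider-specific node labels.
    config.load_kube_config()
    labels = client.CoreV1Api().list_node().items[0].metadata.labels or {}

    if any(k.startswith("eks.amazonaws.com/") for k in labels):
        provider = "eks"
    elif any(k.startswith("kubernetes.azure.com/") for k in labels):
        provider = "aks"
    elif any(k.startswith("cloud.google.com/gke-") for k in labels):
        provider = "gke"
    else:
        provider = "unknown"
    print(provider)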



