
> It seems like the last 4-5 years though have really made it super common again. Bubble maybe?

Cloud providers have successfully redefined the baseline performance of a server in the minds of a lot of developers. Many people don't understand just how powerful (and, at the same time, cheap) a single physical machine can be when all they've used is shitty overpriced AWS instances. No wonder they have no confidence in putting a standard RDBMS on one when anything above 4GB of RAM costs an arm and a leg, so they go looking for "magic" workarounds, which the business often accepts - it's easier to get them to pay lots of $$$$ for running a "web-scale" DB than to pay the same amount for a Postgres instance, or, God forbid, to actually opt for a bare-metal server outside of the cloud.

In my career I've seen a significant amount of time and effort wasted on workarounds such as deferring trivial tasks to queues or building insanely distributed systems, where the proper solution would've been to throw more hardware at the problem (even expensive AWS instances would've been cost-effective once you count the developer time spent working around it).



Just to give a reference for those who don't know: I rent a dedicated server with 128GB of RAM, a 16-core processor (32 threads), 2TB of local SSD storage, and virtually unlimited traffic for $265 USD a month. A comparable VM on AWS would run around $750 a month (if you reserve it long term), and then of course you'll pay through the nose for traffic.
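
To put the gap in rough numbers, here's a quick back-of-the-envelope sketch (Python) using my figures above; the egress rate and the monthly traffic volume are assumptions for illustration, not exact AWS pricing:

    # Back-of-the-envelope cost comparison using the figures above.
    # The egress rate and traffic volume below are assumptions.
    DEDICATED_MONTHLY = 265.0     # dedicated server, traffic included
    AWS_RESERVED_MONTHLY = 750.0  # comparable reserved VM, compute only
    AWS_EGRESS_PER_GB = 0.09      # assumed ~$0.09/GB internet egress
    MONTHLY_EGRESS_GB = 5_000     # assumed 5 TB/month outbound traffic

    aws_total = AWS_RESERVED_MONTHLY + AWS_EGRESS_PER_GB * MONTHLY_EGRESS_GB
    print(f"Dedicated: ${DEDICATED_MONTHLY:,.0f}/mo")
    print(f"AWS:       ${aws_total:,.0f}/mo "
          f"(${AWS_RESERVED_MONTHLY:,.0f} compute + "
          f"${AWS_EGRESS_PER_GB * MONTHLY_EGRESS_GB:,.0f} egress)")
    print(f"Yearly difference: ${(aws_total - DEDICATED_MONTHLY) * 12:,.0f}")

With those assumed traffic numbers the yearly difference runs into five figures, before you even touch storage or managed-database markups.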




