
and a nice Sunday afternoon



And lots of sales on my case


And the illusion of superiority over non-cloud offerings.


I keep trying to explain to people that our customers don't care that there is someone to blame; they just want their shit to work. There are advantages to having autonomy when things break.

There’s a fine line or at least some subtlety here though. This leads to some interesting conversations when people notice how hard I push back against NIH. You don’t have to be the author to understand and be able to fiddle with tool internals. In a pinch you can tinker with things you run yourself.


> I keep trying to explain to people that our customers don't care that there is someone to blame; they just want their shit to work. There are advantages to having autonomy when things break.

There are also advantages to being part of the herd.

When you are hosted at some non-cloud data center, and they have a problem that takes them offline, your customers notice.

When you are hosted at a giant cloud provider, and they have a problem that takes them offline, your customers might not even notice because your business is just one of dozens of businesses and services they use that aren't working for them.


Of course customers don't care about the root cause. The point of the cloud isn't to have a convenient scapegoat to punt blame to when your business is affected. It's a calculated risk that uptime will be superior compared to running and maintaining your own infrastructure, thus allowing your business to offer an overall better customer experience. Even when big outages like this one are taken into account, it's often a pretty good bet to take.


What does NIH stand for?


Not Invented Here


How come?


The small bare metal hosting company I use for some projects hardly goes down, and when there is an issue, I can actually get a human being on the phone in 2 minutes. Plus, a bare metal server with tons of RAM costs less than a small VM on the big cloud providers.


> a bare metal server with tons of RAM costs less than a small VM on the big cloud providers

Who are you getting this steal of a deal from?


Hetzner is an example. Been using them for years and it's been a solid experience so far. OVH should be able to match them, and there are others, I'm sure.


Hetzner is pretty excellent quality service overall. OVH is very low quality service, especially with the networking and admin pane.


hetzner.de, online.net, ovh.com, netcup.de for the EU-market.


Anywhere. Really.

Cloud costs roughly 4x what bare metal does for sustained usage (of my workload). Even with the heavy discounts we get for being a large customer, it's still much more expensive. But I guess op-ex > cap-ex.


Lots of responses, and I appreciate them, but I'm specifically looking for a bare metal server with "tons of RAM", that is at the same or lower price point as a google/microsoft/amazon "small" node.

I've never seen any of the providers listed offer "tons of ram" (unless we consider hundreds / low thousands of megabytes to be "tons") at that price point.


I've had pretty good luck with Green House Data's colo service and their cloud offerings. A couple of RUs in the data center can host thousands of VMs across multiple regions with great connectivity between them.


Care to name names? I've been looking for a small, cheap failover for a moderately low traffic app.


In the US I use Hivelocity. If you want the cheapest possible, Hetzner/OVH have deals you can get for cheap.


I have a question that always stopped me from going that route: what happens when a disk or other hardware fails on these servers? Beyond data loss, I mean. Physically, what happens, who carries out the repair, and how long does it take?


For Hetzner you have to monitor your disks and run RAID-1. As soon as you see the first SMART failures you can file a ticket and either replace the disk ASAP or schedule a time. This has happened to me a few times over the past years; it has always been just a 15-30 minute delay after filing the ticket and at most 5 minutes of downtime. You do have to get your Linux side right, though, i.e. booting with a new disk.
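
Roughly, the monitoring half can be a small cron job along these lines (just a sketch, assuming smartmontools and Linux md RAID; the device names and the alerting are placeholders you'd adapt):

    #!/usr/bin/env python3
    # Sketch of the "monitor your disks" part: check SMART health and
    # md RAID state so you can file the replacement ticket early.
    # Assumes smartmontools is installed and the script runs as root.
    import re
    import subprocess

    DISKS = ["/dev/sda", "/dev/sdb"]  # placeholder RAID-1 pair

    def smart_healthy(disk: str) -> bool:
        # `smartctl -H` prints "PASSED" for healthy ATA drives.
        out = subprocess.run(["smartctl", "-H", disk],
                             capture_output=True, text=True)
        return "PASSED" in out.stdout

    def raid_degraded() -> bool:
        # A "_" inside the [UU]-style brackets in /proc/mdstat means a
        # member of the array has dropped out (e.g. [U_]).
        with open("/proc/mdstat") as f:
            return bool(re.search(r"\[U*_+U*\]", f.read()))

    if __name__ == "__main__":
        bad = [d for d in DISKS if not smart_healthy(d)]
        if bad or raid_degraded():
            # Swap this print for mail/alerting, then file the ticket.
            print("Disk trouble:", bad or "RAID degraded")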

If you don't like that, you can order a KVM VM with dedicated cores at similar prices and the problem is not yours anymore.


Most bare metal providers nowadays contact you just like AWS and say "hey, your hardware is failing, get a new box." Unless it's something exotic, setup usually doesn't take long, and in some cases, just like a VM, it's online in a minute or two.


thanks!


Thanks a million. Those prices look similar to what I've used in the past, it's just been a long time since I've gone shopping for small scale dedicated hosting.


You weren't kidding: a 1:10 ratio compared to what we pay for a similar VPS. And a guaranteed worldwide lowest price on one of them. Except we get free bandwidth with ours.


There are some who'll argue that the resiliency of cloud providers beats on-prem or self-hosted, and yet they're down just as much or more (GCP, Azure, and AWS alike). Don't take my word for it; search HN for "$provider is down" and observe the frequency of occurrences.

You want velocity for your dev team? You get that. You want better uptime? Your expectations are gonna have a bad time. No need for rapid dev or bursty workloads? You’re lighting money on fire.

Disclaimer: I get paid to move clients to or from the cloud, everyone’s money is green. Opinion above is my own.


Solutions based on third-party butts have essentially two modes: the usual, where everything is smooth, and the bad one, where nothing works and you're shit out of luck - you can't get to your data anymore, because it's in my butt, accessible only through that butt, and arguably not even your data.

With on-prem solutions, you can at least access the physical servers and get your data out to carry on with your day while the infrastructure gets fixed.


Any solution will be based on third parties; the robust solution is either to run your own country, with fuel sources for electricity and an army to defend the datacenters, or to rely on multiple independent infrastructures. I think the latter is less complex.


This is a ridiculous statement. Surely you realise that there is a sliding scale.

You can run your own hardware and pull in multiple power lines without establishing your own country.

I've run my own hardware; maybe people have genuinely forgotten what it's like, and granted, it takes preparation and planning and it's harder than clicking "go" in a dashboard. But it's not the same as establishing a country, sourcing your own fuel, and feeding an army. That's absurd.


Correct. Most CFOs I've run into as of late would rather spend $100 on a cloud VM than deal with capex, depreciation, and management of the infrastructure, even though doing it yourself with the right people can go a lot further.


The GP's statement is about relying on third parties; multiple power lines, with generators you don't own on the other end, fall under that.

Fun related fact: my first employer's main office was in a former electronics factory in downtown Moscow, powered by 2 thermal power stations (and no other alternatives) which had the exact same maintenance schedule.


Assuming you have data that is tiny enough to fit anywhere other than the cluster you were using. Assuming you can afford to have a second instance with enough compute just sitting around. Assuming it's not the HDDs, RAID controller, SAN, etc. that is causing the outage. Assuming it's not a fire/flood/earthquake in your datacenter causing the outage.

...etc.


Ah, yes, I will never forget running a site in New Orleans, and the disaster preparedness plan included "When a named storm enters or appears in the Gulf of Mexico, transfer all services to offsite hosting outside the Gulf Coast". We weren't allowed to use Heroku in steady state, but we could in an emergency. But then we figured out they were in St. Louis, so we had to have a separate plan for flooding in the Mississippi River Valley.


Took me a second.

I didn’t know the cloud-to-butt translator worked on comments too. I forgot that was even a thing.


Oh that’s weird, because it totally worked for me with “butts” as a euphemism for “people”, as in “butt-in-seat time” — relying on a third-party service is essentially relying on third party butts (i.e. people), and your data is only accessible through those people, whom you don’t control.

And then “your data is in my butt” was just a play on that.


I keep forgetting that I have it on, my brain treats the two words as identical at this point. The translator has this property, which I also tend to forget about, that it will substitute words in your HN comment if you edit it.

But yeah, it's still a thing, and the message behind it isn't any less current.


There is a cloud I've developed that is secure and isn't a butt :P

https://hackaday.io/project/12985-multisite-homeofficehacker...

I built an IoT setup using cheap parts (Arduino, nRF24L01+, sensors/actuators) for local device telemetry, plus MQTT, node-red, and Tor for connecting clouds of endpoints that aren't local.

Long story short, it's an IoT system that is secure, consisting of a cloud of devices only you own.

Oh yeah, and GPL3 to boot.
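
For a flavour of the local-telemetry piece, pushing a reading into the MQTT broker that node-red listens on is only a few lines (a hedged sketch, not the project's actual code; broker address, topic, and values are placeholders):

    # Publish one sensor reading to a local MQTT broker (e.g. mosquitto
    # on localhost) for node-red to pick up. Requires the paho-mqtt
    # package; the Tor bridging between sites is a separate layer.
    import json
    import time

    import paho.mqtt.publish as publish

    reading = {
        "node": "greenhouse-1",     # placeholder device name
        "temperature_c": 21.4,      # placeholder sensor value
        "ts": int(time.time()),
    }

    publish.single(
        topic="home/telemetry/greenhouse-1",
        payload=json.dumps(reading),
        hostname="localhost",
        port=1883,
    )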



