I'm not really sure why this should be a surprise. Just as big companies use a mixture of owned and rented real estate for various reasons, the same is true of other large expenses. If you have a large, predictable, core workload, it makes sense to bring it in-house and use the elastic (rented) resources for the unpredictable stuff (e.g. new efforts).
It's not like Amazon doesn't know this too (e.g. they try to make the transition harder by offering lots of proprietary services that help you ramp up faster, but can't be used elsewhere)
Agreed - past a certain (probably fairly massive) threshold of predictable long-term load, it seems clear that renting resources is going to be more expensive than the all-in cost of owning and operating your own.
Were the opposite true, you'd wonder about the economics of the cloud business model.
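To put rough numbers on it (a back-of-the-envelope sketch in Python; every figure here is an assumption for illustration, not anyone's actual pricing):

    # Toy break-even comparison: renting vs owning a steady baseline load.
    # All prices are assumptions for illustration.
    rent_per_server_month = 250.0       # assumed on-demand-equivalent rate
    own_capex_per_server = 4_000.0      # assumed hardware cost, amortized below
    own_opex_per_server_month = 120.0   # assumed power, space, staff share

    months, servers = 36, 1_000         # 3 years of a predictable core workload

    rent_total = rent_per_server_month * months * servers
    own_total = (own_capex_per_server + own_opex_per_server_month * months) * servers

    print(f"rent: ${rent_total:,.0f}, own: ${own_total:,.0f}")
    # rent: $9,000,000, own: $8,320,000 -- owning wins here, but only
    # because utilization of the owned hardware stays high, which is
    # exactly the "predictable core workload" case.

Flip the utilization assumption (bursty load, owned hardware sitting idle) and the comparison flips with it.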
I think a more interesting way to look at this is: are the other advantages of cloud services (e.g. a lot more flexibility, and bundled proprietary technologies that you can't economically build yourself) actually worth the extra money?
Yes, by not using a cloud service you might save $2B a year, but does that cost you the opportunity to make even more than that, given you're probably moving slower or at least less efficiently than you otherwise could?
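To make that trade-off concrete (purely illustrative: the $2B is from this thread, everything else is an assumed figure):

    # Toy comparison: cloud-exit savings vs revenue forgone by moving slower.
    # growth_drag is the crux of the argument, and it's purely an assumption.
    cloud_savings_per_year = 2_000_000_000    # the $2B from the thread
    revenue_per_year = 50_000_000_000         # assumed
    growth_drag = 0.05                        # assumed: 5% of revenue lost to slower shipping

    net = cloud_savings_per_year - revenue_per_year * growth_drag
    print(f"net effect of leaving the cloud: ${net:,.0f}/year")
    # prints $-500,000,000/year with these numbers: even a modest drag can
    # swamp the headline savings, which is the whole question being asked.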
> but does that cost you the opportunity to make even more than that, given you're probably moving slower or at least less efficiently than you otherwise could?
The potential upside of fast-moving cloud features needs to be weighed against the opportunity costs of slow-moving cloud features. Where bespoke solutions can immediately provide tailored performance and maximize technical capabilities, a missing feature in any of your cloud provider's services can be a showstopper or a roadblock you can't mitigate. And while much of this technology is past the point of economical replacement, some of what's being shared through the cloud would be nigh unfathomable to recreate.
Which is to say: the black juju behind Windows Update would probably take building a new Microsoft to reach its present maturity, and unless you're a certified "big boy", letting small teams somewhere else fully dictate what you can and can't do at service boundaries probably hurts you in the long run.
Based on that, and IMO/IME: the answer isn't a binary choice but a constantly shifting point on a spectrum between the two, where on-premises/local-cloud and remote-cloud services are aware of one another and maximize capabilities while minimizing costs. Hybrid installations are just stronger, and are easier to reshape as costs change.
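One way to picture that shifting point (a minimal sketch; the placement rule and workload fields are invented for illustration):

    # Hedged sketch of hybrid placement: each workload goes to whichever
    # side an (assumed) rule of thumb favors, and can move as costs shift.
    def place(workload):
        # Assumption: steady, long-lived load amortizes well on owned
        # hardware; spiky or short-lived load suits rented capacity.
        if workload["predictable"] and workload["lifetime_months"] >= 24:
            return "on-prem"
        return "cloud"

    workloads = [
        {"name": "core db", "predictable": True, "lifetime_months": 60},
        {"name": "new ml experiment", "predictable": False, "lifetime_months": 3},
    ]
    for w in workloads:
        print(w["name"], "->", place(w))
    # core db -> on-prem
    # new ml experiment -> cloud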
There are plenty of reasons to choose a point in the space spanned by the axes make/adapt-existing × insource/outsource × proprietary/open. None of them is zero-cost unless you really think your platform is good enough that enough others will write for it without you doing anything.
It's possible that you would choose to write your own because nothing in the market is even close to good. It's possible that you would open source it in the hope that it becomes a standard and that others will add new features you benefit from too (this is not a zero-cost option either). It's possible that you would even keep it proprietary because that's cheaper. But given the way Amazon operates, it's pretty clear that they offer their own proprietary services to lower the barriers to adoption ("simply offer a better option") and raise the barriers to switching.
I don't see anything wrong with this strategy legally or morally, though I mostly avoid Amazon's proprietary tools because lock-in is a risk I'm not willing to embrace. There are of course situations where it isn't a problem (e.g. a tool with an anticipated finite lifetime: the cost of lock-in in that case is essentially nil).