
Those numbers actually make the AWS instances a shoo-in for my current development purposes.

The cost of housing and managing a unit of hardware is nonzero. Actuals vary wildly by location, purpose, and sector; but if everyone can get their heads out of the hobbyist-tinkerer mindset for a moment and consider that lifetime TCO is a real and meaningful consideration for businesses, then the component that isn't buying/leasing the server itself is typically several hundred dollars, per physical unit, per annum. Public clouds don't nullify all of it, but they swing the needle dramatically.

Since my duty cycle for a Mac Mini is rather less than 20%, the economics even of on-demand instances immediately make sense, and having it inside the VPC boundary without hybridising is gravy on the meat since I'm consuming several other AWS services besides.

These two factors (lifetime TCO and scale-down) are the economic foundation of the value proposition of utility computing. The "I can build that in my garage for cheaper" crew aren't wrong, but they're missing the point.

Putting on my sometime cloud infrastructure product manager's hat, long-term observers of service pricing may also observe the common enough pattern of starting a new product with a relatively high price, one that selects for early adopters and other price-insensitive (and, you hope, glitch/MVP-tolerant) customer segments, then gradually ratcheting prices down in order to estimate (amongst other things) the price elasticity of demand. It's naturally easier to get cheaper over time than to go the other way. In this, AWS's position on compute is congruent with that of any other commodity merchant⁽¹⁾ with the luxury of being able to withstand potential losses on a single product line. For most EC2 instance types I'd expect they have a very good model of the price sensitivities, but mac1 is undoubtedly a unique platform with elevated uncertainty in the price parameters.
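
As a rough illustration of the elasticity bit (the numbers below are entirely hypothetical, just to show the arithmetic):

    # Arc (midpoint) price elasticity of demand from two observed
    # (price, quantity) points; all figures here are made up.
    def arc_elasticity(p1, q1, p2, q2):
        pct_dq = (q2 - q1) / ((q1 + q2) / 2)
        pct_dp = (p2 - p1) / ((p1 + p2) / 2)
        return pct_dq / pct_dp

    # e.g. cutting the hourly rate from $1.10 to $0.90 while usage rises
    # from 1,000 to 1,400 instance-hours/day:
    print(arc_elasticity(1.10, 1000, 0.90, 1400))  # ~ -1.67, i.e. demand is elastic

Ratchet the price down a few times, watch how demand responds at each step, and you get a serviceable estimate of that curve without ever having to raise prices on anyone.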

Which is my long-winded way of saying, it'll likely be cheaper in a few months.

-----------------------

⁽¹⁾ yes, yes, just like a fruit shop, very droll.




> Since my duty cycle for a Mac Mini is rather less than 20%, the economics even of on-demand instances immediately make sense

MacStadium's prices are "rather less than 20%" of AWS's prices. For AWS to make sense, its pricing would have to be comparable to MacStadium's.

And don't forget the 24-hour minimum. If your duty cycle for a Mac Mini is only 20% per day every weekday, well, you can't rent a Mac Mini for less than a 24-hour period, so instead of paying for 20% of a month, you'll be paying for 20 work days, roughly two-thirds of a month. On AWS, that would run you $546/month, 4x as much as you'd pay MacStadium for the entire month.
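
Sketching that arithmetic out, using this comment's own figures (illustrative only, not current pricing):

    # AWS mac1 on-demand with a 24-hour minimum vs a flat monthly Mac Mini rental.
    HOURLY_RATE = 546 / (20 * 24)   # ~$1.14/hr, implied by the $546 figure above
    MONTHLY_RENTAL = 546 / 4        # ~$136/mo, implied by the "4x" comparison above

    def aws_monthly_cost(days_needed):
        # each day of use is billed as a full 24-hour block
        return days_needed * 24 * HOURLY_RATE

    print("flat monthly rental:", MONTHLY_RENTAL)
    for days in (1, 5, 10, 20):
        print(days, "billed days:", round(aws_monthly_cost(days), 2))
    # at ~5 billed days per month the two options roughly break even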

AWS's price is only comparable to MacStadium's if you need five 24-hour periods per month or fewer.

And, sure, AWS's prices will decline at some point, but I'm not expecting an 80% price drop this year.


This is what I mean by "hobbyist mindset": driving straight past the huge neon sign flashing TCO, assuming that some idiot (that's me) has failed arithmetic 101 (totally possible, my undergraduate mathematics ruined me for sums, isn't everything an infinite series?), and blithely disregarding the cost and time I'll need to obtain authorisation to establish a new commercial relationship, authorisation from legal and security to have our IP and/or customer data in yet another facility, then the overhead of managing the technical and billing elements, and the compliance reporting, and so on.

So yes, AWS is still a shoo-in.

Don't even get me started on the data sovereignty and latency issues of a self-proclaimed "global" hosting provider that's only on two continents, neither of which we are operating in.


AWS is available on all continents except Antarctica.

If you’re operating there, then your providers in all respects will be more than a little limited.


That's rather the point. MacStadium is only on two continents.


I have an app I need to compile on a Mac that I only make changes to maybe once a quarter, or four days a year. That's an ideal use case for on-demand AWS Macs, and I'll probably transition to that if the Mac Mini in the corner ever decides not to boot or not to accept a mandatory OS update when I have to use it someday.

Also, I'm more likely to transition to this because I'm familiar with AWS; I didn't know MacStadium existed before this thread.

If that's their target market (and not someone who wants a scalable Mac build farm available 24/7 and doesn't care about costs), then it makes sense - I pay a lot less than the price of a Mini to rent it 4 days a year, if Amazon can find 90 other people who want to do the same, all of us are better off.


If you haven't already, you may want to consider https://www.macincloud.com/, where you can rent a Mac for $1 per hour as you need it. Your use case seems very well suited to that. It's at roughly the same price point as AWS, but you can stretch your dollars further without AWS's 24-hour upfront minimum.


If you’re not testing/using it the rest of the time, doesn’t that affect the quality? Dog-fooding is usually a better idea, right?


Sure, but then I'd have to use an iPhone. The machine is the main thing to my customers; the app is just a tool for monitoring its status, and the Android app and webserver show the same information.


> MacStadium's prices are "rather less than 20%" of AWS's prices. To make sense, AWS would have to be comparable to MacStadium's pricing.

Not at all. MacStadium is not inside the AWS network, so there’s nothing comparable about it.

I’m not sure why they went for a 24h minimum though. That defeats the entire point of cloud computing for me.


That 24-hour minimum is based on Apple's license agreement.


I'm no expert, but I'm stunned that it costs more than $10,000 a year to connect a single computer to a corporate network such that AWS Macs become cost competitive. I've never worked in a big enterprise organization but I just can't wrap my head around the inefficiency required to make that math work.


I work in a large company.

The problem is not the first-time setup. The problem is essentially what “uptime” you can provide. Once you set up an infrastructure, many people come to depend on it.

Let's say two teams lose half a day when that CI/CD Mac Mini goes down; then paying a cloud provider to manage many instances and “share them on demand” makes a lot of monetary sense.

Of course, paying a cloud provider may or may not make sense for hobbyists. Personally, I pay for cloud storage - that's super hard to get right.


For what Amazon is asking, one could have both an extra hot _and_ an extra cold standby locally, so it still doesn't make economic sense for a large company.


In a very substantial number of enterprises, the cost of procuring, validating, installing, powering, cooling, fire-rating, commissioning, maintaining, writing up, handing over, revising, reporting, auditing, approving, and very occasionally actually executing those hot<>cold standby procedures exceeds the entire purchase cost of the cold standby unit.


Hell, the company will spend more money on agonizing over the question of whether we should buy a $1500 Mac in the first place.

Besides, if your cloud spend is already a fuckton, $24 per day extra is not going to make you blink.


For real. I used to work at a large company in a highly regulated industry with lots of scrutiny on our books. The number of meetings and program proposals it would take just to get finance to agree to the capex of standing up a dozen Mac Minis would easily dwarf the hardware cost. That's before you get to the actual work. From my experience, saying "we're just going to increase our EC2 spend" is a no-brainer.


I could see it being reasonable if a medium-to-large company's entire infrastructure was cloud based and they only needed a handful of Mac Minis. Alternatively, a company who needed them only a handful of days a year for some reason I can’t immediately think of.

Otherwise I agree. 24 hour minimum removes immediate scaling as an option, and for anyone who already has their own infrastructure the cost is prohibitively high. My hope would be that with time, at the very least the 24 hour minimum would eventually be removed, and ideally the price will come down too.


> a company who needed them only a handful of days a year for some reason I can’t immediately think of

Perhaps it's good for render farms. Instead of building a giant render farm that sits idle between productions, you spin up a render farm only when you need one.


Aren't most renderers compatible with Linux or Windows, where you can get much larger instances?


I don’t understand how you are confused. If I need to check if my app works on m1, I can now spend $24 to do that instead of $650 or whatever it retails for.


So these are for people who want to do quick compatibility testing? For that use case, how does it not make more sense to go with MacStadium? If you end up needing to test for two days instead of one, you're already in the pricing territory where you could have gotten a full month of usage from MacStadium.


> how does it not make more sense to go with MacStadium?

If you're already in AWS, your deployment infrastructure and tooling is already configured and tuned for AWS. Deploying a Mac EC2 instance is just adding another target. Whereas deploying to another provider is an unknown amount of work to integrate with their API (if they even have one). The wall-time cost of people's time and effort vastly outweighs the marginal savings you'd get by going with a low-cost provider.
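
To give a feel for how small that incremental step is, here's a rough sketch (boto3; the AMI ID, availability zone, and key name are placeholders, and Mac instances run on Dedicated Hosts, so there's an allocate-host step before the usual run-instances call):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # mac1.metal runs on a Dedicated Host, so allocate one first
    host_id = ec2.allocate_hosts(
        AvailabilityZone="us-east-1a",
        InstanceType="mac1.metal",
        Quantity=1,
    )["HostIds"][0]

    # then launch onto that host, same as any other EC2 target
    ec2.run_instances(
        ImageId="ami-xxxxxxxx",      # a macOS AMI (placeholder)
        InstanceType="mac1.metal",
        MinCount=1, MaxCount=1,
        KeyName="my-build-key",      # placeholder key pair
        Placement={"Tenancy": "host", "HostId": host_id},
    )

Same credentials, same VPC, same tagging and billing plumbing you already have; that's the integration cost being weighed against adding a second vendor.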


I will spend more time trying (with no great chance of success) to obtain authorisation to establish an additional commercial relationship, plus corporate billing/card approval, and the time burned on this instantly exceeds any amount of money they could possibly save me.


You can check in one day, one shot, if your app "works" on an M1 and that's that? Yikes.


Best case scenario, that's 3 work days. If I need more, that's a dollar an hour.


It's a lot easier to click a button on the AWS console than to get approval to buy new capital equipment, or to get approval to use a new vendor (for which we'd likely need legal review, security audits, etc.).


It would be cheaper if you already had your own data center, or an office space sufficiently large to store these Mac Minis. But if you have neither and are operating out of, say, AWS, then it becomes more expensive to rent space to put these Mac Minis into, especially if you don't need it all the time. In that way it can actually be "cheaper", because it's compared to the cost of building out the infrastructure required to support your own hardware.


I'm fascinated by this thing where people tacitly assume that employee time is free. Even people who like to actually receive their paychecks.

The money spent paying someone to set up a Mac server somewhere else, getting that somewhere else connected to your AWS networks, and ensuring that connection is secure, and generally integrating it into your existing cloud devops infrastructure (or, perhaps more expensively, not integrating it into your existing devops infrastructure), would, I'm guessing, be horrendously expensive compared to the premium Amazon charges on a relatively turnkey solution for use cases such as, "We need to run the occasional Darwin build."


I've maintained Mac Mini farms and it is both time consuming and expensive. If it were reasonable to set up and maintain Macs for devops, these offerings probably wouldn't exist.


They would still exist. When a company's entire shop is set up in AWS they want to keep it that way, not buy physical hardware to install in their office, create a VPN to make build pipelines, worry about physical security, etc.


Yeah, the premium here would be easily justified for us in having a Mac colocated with the rest of our infra and it being easy to provision and manage through CloudFormation/APIs just like the rest of our infrastructure.


> Since my duty cycle for a Mac Mini is rather less than 20%, the economics even of on-demand instances immediately make sense,

Are you factoring in the 24 hour minimum AWS charges for a Mac Mini?


If you are comparing buying the device plus the other costs of running the device vs renting from AWS, then the 24h upfront minimum is not an issue. You could even consider renting for a year or more.


Certainly it's going to depend on your use case.

On my team, we just have a Mac Mini sitting under someone's desk to do stuff that requires a Mac. It gets used for about 1 hour every day for a daily build. For us, AWS's offering doesn't make financial sense.

Even if we were to switch to only doing a weekly build, it wouldn't be worth it, as that would still cost around $1,250/year.


Yes. The details don't particularly matter, but broadly, it's periodic QA of a couple of apps & tools vs Safari, vs OSX, and vs homebrew. Not CI/CD.

OSX seems more prone to bitrot regressions than Windows or Linux; I suppose Apple have a "you're with us, or you're against us" attitude, and this even applies to base OS backwards (in)compatibility.


> Since my duty cycle for a Mac Mini is rather less than 20%, the economics even of on-demand instances immediately make sense,

What are you doing with macs that naturally works out at ~one continuous 24h period per work-week on average? Turn on CI once a week Thursday midnight, and no one goes into the weekend before all test failures are fixed?


Maybe start the Mac CI monday morning, and fix errors then? Would lead to a more enjoyable weekend IMHO.


Ya, weekly builds on Monday morning for a gaming company with multiple teams make sense. They could even slice it up across their teams for better TCO.


> but if everyone can get their heads out of the hobbyist-tinkerer mindset for a moment and consider that lifetime TCO is a real and meaningful consideration for businesses

This is the hard thing for me. I just like the control that comes with having hardware at home, and I've got static IPs. But it costs to power and cool them, and they aren't elastic like cloud/IaaS. Having started the move to VPSs and finding them very good for the price, I'm now trying to wrap my head around "cattle, not pets", but still with a micropreneur mindset. I think it's still worth it, and my home office is getting quieter with every server I relocate to somebody else's hardware.


I'm the same way as you. I love my hardware.

But the real place cloud/IaaS shines for me is when you aggressively scale, both up and down. If you're used to pets and physical machines then you probably over provision a lot. With the cloud you can ride the CPU/memory line a lot closer and scale quickly when needed.
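
One hedged sketch of what "riding the line" can look like in practice (boto3, with a hypothetical Auto Scaling group name; not specific to the Mac instances discussed here, which don't scale this way):

    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")

    # target-tracking policy: add/remove instances to hold average CPU
    # near 70%, so the group scales down as readily as it scales up
    asg.put_scaling_policy(
        AutoScalingGroupName="build-workers",   # placeholder group name
        PolicyName="cpu-target-70",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 70.0,
        },
    )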


Ok, I understand how just looking at the price of a boxed Mac Mini in the Apple store doesn't represent the TCO...

...however, is the AWS price including a lot of value that the Mac Mini colo price doesn't? I believe the AWS price is dramatically higher unless you can batch the 20% duty cycle into full days (AWS is following Apple's EULA and only billing in chunks of 24 consecutive hours).

I'm not a "I can build it in the garage cheaper" type; I'm a "we used to build servers that way, run them ourselves (and steal meeting rooms from ourselves and put extra cooling in), and then I went off to do other things for 20 years and now I work at a huge company where we build our own cloud stuff... but if I were at a less huge company, how would this stuff actually work out?" kind of person (i.e. I know the _theory_ of renting other people's servers, but the finer points of actually evaluating it are not currently in my grasp).


The thing with scaledowns is that they usually never happen. It also costs money to build scaledown logic and to build around elastic resources.



