I have two conflicting responses to what I am seeing here ...
First, this is awesome. Just like I want to live in a world where people are paying picodollars for cloud storage[1], I also want to live in a world where a bunch of mac pro cylinders are racked up in a datacenter. Very cool.
Second, this is complete silliness. I'm not going to go down the rabbit hole of flops per dollar, but there is no way that you can't build a hackintosh 1U with dual CPUs and multiple GPUs and come out big money ahead. Whatever management overhead gets added by playing the hackintosh cat-and-mouse game is certainly less than building new things out of sheet metal.
Let me say one other thing: right around mid-2000 was when certain companies started selling fancy third-party rack chassis gizmos for the Sun E4500, which was the Cadillac of datacenter servers at the time. Huge specs on paper, way underpowered for the money they cost ($250k+), and the epitome of Sun's brand value. And there were suddenly new and fancy ways to rack and roll them.
This reminds me a lot of that time, and that time didn't last long...
Obviously, but running OSX on non-Apple hardware is a violation of its EULA.
I contacted a lawyer about this (I wanted to run a Hackintosh in the office), and the language is very clear. The author of the software has full power to license its use to you with any restrictions they find necessary, no matter how ridiculous. If Apple only sells you the license on the condition that you promise not to run it on a Thursday, you'll be in violation of their terms if you run it on a Thursday.
> you'll be in violation if you run it on a Thursday
Indeed, there is a JS library open-sourced by Microsoft to decode Excel files, named xlsx.js. The license states that it cannot run on any OS other than Windows. That means even though it's JavaScript, a page hosting it cannot be viewed on a Mac or Linux.
Long story short, Stuart Knightley created a clean-room implementation named js-xlsx to do the same thing, without the legal strings attached.
This is correct, and a huge impetus for our use of Apple hardware. We simply cannot risk our business on saving some money at the expense of violating Apple's EULA.
> We simply cannot risk our business on saving some money at the expense of violating Apple's EULA.
I believe people are questioning whether Apple hardware/software is actually a requirement of your business (and suggesting that it's not "some money" but a lot of money you'd be saving).
It's difficult to fathom Apple hardware/software being a hard requirement to operate any business (as in, you can't operate without it). Both Windows and Linux have a plethora of image utilities, audio tools, etc.
Sure, OSX might have some optimized image processing stuff, but couldn't the massive savings be used to scale wider with more generic hardware and still come out ahead?
Not at the moment. The cost overhead of OS X is minimized by my approach to datacenter design and operation, and the benefit of OS X's image processing software stack is maximized by the imaging engineers on staff at imgix, who are making the most of it.
The math may not work out this way forever, and when it doesn't, we'll make a change.
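To give a flavor of what that software stack looks like, here is a minimal sketch (illustrative only, not imgix's actual pipeline; the file paths and parameter values are made up, though the filter names are stock Core Image) of a GPU-backed resize-and-sharpen pass in Swift:

    import CoreImage
    import AppKit

    // Illustrative resize-and-sharpen using stock Core Image filters.
    // CIContext compiles the filter graph and executes it on the GPU.
    let inputURL = URL(fileURLWithPath: "/tmp/input.jpg")   // made-up path
    guard let input = CIImage(contentsOf: inputURL) else { exit(1) }

    let scale = CIFilter(name: "CILanczosScaleTransform")!
    scale.setValue(input, forKey: kCIInputImageKey)
    scale.setValue(0.5, forKey: kCIInputScaleKey)           // downscale to 50%
    scale.setValue(1.0, forKey: kCIInputAspectRatioKey)

    let sharpen = CIFilter(name: "CISharpenLuminance")!
    sharpen.setValue(scale.outputImage!, forKey: kCIInputImageKey)
    sharpen.setValue(0.4, forKey: kCIInputSharpnessKey)

    let output = sharpen.outputImage!
    let context = CIContext()
    if let cg = context.createCGImage(output, from: output.extent) {
        let rep = NSBitmapImageRep(cgImage: cg)
        try? rep.representation(using: .jpeg, properties: [:])?
            .write(to: URL(fileURLWithPath: "/tmp/output.jpg"))
    }

The point isn't that this particular filter chain is special; it's that the whole graph gets fused and run on the GPU by the OS, which is the sort of thing an imaging team can build on.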
Apple gave up on the server market. They don't care about number-crunching or scientific computing. To build a server-side based business around OS X doesn't make sense to me. Did you try writing your image processing code for Linux and CUDA / OpenCL? Is there anything specific about the OS X frameworks which means you can't develop a non-OS X solution?
I can't find that for EULAs proper (it would surprise me, too, as that would allow anyone to pirate any shrink-wrapped software), but EULAs cannot prohibit selling your license. That was upheld for software bought by download, too: http://curia.europa.eu/jcms/upload/docs/application/pdf/2012...:
"Where the copyright holder makes available to his customer a copy – tangible or intangible – and at the same time concludes, in return form payment of a fee, a licence agreement granting the customer the right to use that copy for an unlimited period, that rightholder sells the copy to the customer and thus exhausts his exclusive distribution right. Such a transaction involves a transfer of the right of ownership of the copy. Therefore, even if the licence agreement prohibits a further transfer, the rightholder can no longer oppose the resale of that copy"
You can even buy the right to download future updates:
"Therefore the new acquirer of the user licence, such as a customer of UsedSoft, may, as a lawful acquirer of the corrected and updated copy of the computer program concerned, download that copy from the copyright holder’s website."
AFAIK, if the EULA is only shown after the sale has already taken place, it is basically meaningless around here. But IANAL and am only relaying what I heard on the internet.
The lawyer I spoke to was Dutch. There is some lack of clarity about EULAs sold to consumers; I think the general idea of the consumer protection is that the consumer should have seen the EULA before they paid for the product.
That said, it says on the box that the software is only for Apple hardware, and I think even only as an upgrade for an existing OSX install.
If you're a company, then EULAs are definitely binding, no matter where you are.
Non-Apple hardware does not differ in any meaningful way from Apple hardware. The performance of anything in OS X is perfectly tuned for any generic desktop computer. It also supports most hardware straight out of the box.
Any generic desktop computer with the same hardware. It sounds like they're using Apple's image pipeline, which I imagine would be designed around the specific graphics hardware in the Mac Pro. Sure it could work on other hardware, but when you know exactly the hardware you're running on you can do a lot of low-level optimizations you couldn't otherwise do.
Apple supports Intel, AMD, and NVidia GPUs. Their graphics pipeline has over the years supported substantially all of the graphics chips produced by those vendors in the past 10-15 years. Their current full feature set may only be supported on GPUs that admit an OpenCL implementation, but that's still every bit as broad as the generic desktop computer GPU market—about two microarchitectures per vendor. Apple's not getting any optimization benefits from a narrow pool of hardware, for GPUs or anything else. The only benefit they get along the lines of narrow hardware support is that they don't have to deal with all the various motherboard firmware bugs from dozens of OEMs.
(I'm the datacenter manager at imgix, and I wrote the article)
I've alluded to this elsewhere, but the math doesn't add up the way your gut reaction suggests. Going with commodity servers and GPUs is cheaper, but not by a significant enough margin relative to the engineering costs.
Building things out of sheet metal is actually easier than migrating to Linux, for one big reason: we can pay someone else to do it, because it isn't part of our core competency. In fact, I'm pushing to open source the design of this chassis, in tandem with our design partner (Racklive). Not sure if it will happen, but I'd love to see it.
It strikes me that OS X is a solid server platform. Linux is always a more flexible choice, but if OS X works for you, it works for you. That being said, is sticking cylindrical Mac Pros on their side into square racks really the best solution?
There are 2 problems I see with this design:
1: You are placing the Mac Pros on their side, which may lead to premature bearing failure in the main cooling fan. Apple designed the cooling fan to be as silent as possible, which means they optimized the bearing and the fan to work in a vertical orientation. Bearings designed for thrust (vertical) loads may not hold up so well if run horizontally for a long time.
2: You are fitting triangular computers, wrapped in round cases, into square boxes, resulting in a significant loss of density.
Considering that Apple is a huge company that owns huge data centers, combined with the fact that it would be simply stupid for a company who makes their own OS to run anything but that OS, and combined with the above-mentioned problems with using Mac Pros as server "logs" (because you can't call them blades), I would assume that Apple internally has OS X servers designed in the traditional blade configuration.
They may not sell or advertise them, but they MUST have them. Given that you guys are buying a ton of hardware, are located nearby, and would be actively promoting running Apple hardware, wouldn't it be wise to at least approach Apple and see if they would be kind enough to sell you some of those blade form factor servers they simply must have?
I may be completely wrong here, but Apple did brag about how Swift is the new language that's so flexible you can make a mobile app in it, or a desktop app, or even a full-blown social network. If that's the case, they must have some plans for the server market? No?
Anyway, in the end it's a cool design, but I would seriously consider at least mounting the Mac Pros vertically to avoid fan issues. You can actually get a tighter form factor that way as well, unless space is not the issue. And if it's not, then hell, what's wrong with just placing a bunch of Pros on an Ikea shelf in a well air-conditioned room :)
1. That's certainly a possibility, and one that we won't really have hard numbers on for some time to come. However, the fan is about a $60 part, so provided that we don't have coordinated, catastrophic failures and that they live for at least a year, we're doing alright. Do note that Apple specifically says that the Mac Pro may be operated on its side. https://support.apple.com/en-us/HT201379
2. True, but 1U per server is not bad density by any stretch. Our app servers effectively occupy 0.5U; database and storage servers effectively occupy 1U. So this puts the Mac Pros on par with the larger server class. Were we to deploy renderers in conventional server chassis, a similar system would occupy at least an effective 1U, if not a full 2U.
What Apple does internally is, of course, shrouded in mystery. I know some people there, and we talk to people when we can, but they just aren't the kind of company that is going to tell you how they make the sausage.
From what I've heard and my sense from speaking with them over the years, they do not use OS X in production. They used to use Darwin and Solaris, and now almost exclusively use Linux (presumably Solaris is still around to run Oracle). They did use Xserves internally at one point, but even at their scale it isn't worth building them just for their own use.
Fascinating, I had no idea Apple approves using the Mac Pros on their side. It would be interesting to find out what happens with the fans.
It's also fascinating that they are running Linux internally nowadays for their server-side stuff. What next, I find out that all of the Microsoft data centers run Debian :) Considering that they employ all of those Objective-C and Swift engineers, you would think that they would want to leverage that workforce to write Obj-C or Swift backend code as well. For most backend tasks, either Swift or Obj-C is as good a language as any other.
Anyway, rackable OS X systems are a missed opportunity for Apple. They could sell them to a company like yours, to movie production houses, and even design some libraries and make a play for the web app market with Swift. Not sure how successful that last one would be. As for the economies of scale, they don't even need to manufacture or design the system: take an off-the-shelf rack-mount server from another manufacturer, fiddle with the casing a bit to give it that Apple feel, and load OS X on it. Perhaps the margins in server-side hardware are way too slim.
I feel quite lucky to have got a few before they stopped selling them. I have three dual-CPU Xserves still running as our main app servers, and they've been some of the most reliable boxes we have.
And I don't believe that they only started using Unix operating systems other than their own "nowadays". While OS X was still too immature, during its development and long maturation, what do you think they were using?
I thought they were using NeXTSTEP, hence all the NS API calls in Obj-C. Back in 1989, I am guessing, NeXT would have been built on some kind of Unix system first. Considering that OS X is a descendant of NeXTSTEP, I would think that before OS X they would use it to run code, servers, etc.
"combined with the fact that it would be simply stupid for a company who makes their own OS to run anything but that OS"
Why would you assume that? There are a ton of things that Linux does better than OS X, and it would be extremely stupid for any company, regardless of size, not to use the right tool for the job. For example, even IBM uses, sells, and supports Linux alongside AIX and z/OS on their line of servers and mainframes. I think that your assumption is just really old-fashioned.
Internally, Apple does use Linux, just as Microsoft uses a blend of OSes, supporting Linux on Azure, for example. I read that they actually use Linux as the host for their Hadoop service on Azure.
At its core, OS X is Unix. In what way would Linux be a better choice? I am not saying that Linux is a worse choice, but for a company that writes an OS as one of its core businesses, it only makes sense to run that OS in as many places as possible. For one, by running OS X as a server OS they would necessarily spend more time on development and improvement of the OS core. This would pay off in the long run by further improving the stability and reliability of OS X.
I am not arguing that OS X is the perfect solution in most circumstances, but it can be a good solution in many situations, especially if you are Apple and have the full source and the capability to adapt the OS as necessary.
Microsoft, especially nowadays, tries to be very cross-compatible, so it's not surprising that Azure supports Linux apps and guests. But Azure RUNS on Windows Server 2008, not Linux, not Unix.
Because it isn't really about the OS, it's about the software. OS X is fine as a server platform, but it doesn't have the same software and support ecosystem for data center usage. Apple dumped that market with the Xserve because it didn't work for them.
Red Hat/SUSE/Oracle etc. all sell tailored solutions for that usage that are Linux-specific technology (mostly; some stuff gets ported to other Unix derivatives, but most doesn't). Sure, Apple could do all that too, but they don't want to. It isn't their market, so why sink money and effort into engineering OS X to do it when they can just buy high-quality products ready to go?
What tools is OS X lacking? From my experience, most of the development and server tools are available natively on OS X. It lacks support for containers, but that would be a worthy addition, and I would say worth spending money and time on. The rest is already there for the most part. Developing their server infrastructure further would allow Apple to make a play for the corporate market. Anyway, it's a silly argument. I thought they ran most of their backend on OS X; it looks like I was wrong.
It's not small server stuff like Apache that they are missing. It's stuff like distributed failover, exotic driver support, SAN, management etc. that they are missing. Big data center stuff, the kind of thing companies like Red Hat make.
Those kinds of products are huge investments. Sure, Apple might be able to market towards the enterprise, but they simply don't think there is any money to be made. They used to have the Xserve, for instance, which tried to stay afloat in that market but made little money. Since they canceled it, Mac OS has only been developed as a small-to-medium server OS (which it isn't half bad at). But big-time data centers are a different world.
For instance, as a very basic example, does Mac OS support InfiniBand or the more exotic high-speed Ethernet network interfaces? For InfiniBand, the answer is no, and in the other case the answer is "kinda, but not really."
My background: 7 Xserves still in production here in K-12 education, 1000+ users in Open Directory.
In the pipeline: Migrating to the new shiny Mac Pros along with OS X Server.
Reasons: Thunderbolt 2 connectivity is amazing and works fine for connecting Fibre Channel RAIDs.
OS X Server: Though it's correct that the GUI got simplified a bit, it's the same server package, as complex as it always has been, yet easy enough to support. And if configured correctly, it's a solid workhorse for many scenarios: network accounts for lab use, a calendar and contacts server; along with some helper tools it works fine in heterogeneous environments and supports huge numbers of users via LDAP, just to name some reasons. For 20 bucks, it's the best server OS to support Mac and iOS clients. And because the underlying foundation is UNIX, it's friendly with any networking stuff, such as RADIUS for your WPA2-Enterprise wi-fi needs, just to name a few.
One thing that is not quite right in the post above: SAN support exists via Xsan.
I'm not trying to lure you down the flops-per-dollar rabbit hole... but when I was researching the Mac Pro before purchasing, putting together a computer with GPUs providing the same flops performance came to something like 80-90% of the cost of the Mac Pro. The D700s are really good. This is just an anecdote based on my memory, so take it with a grain of salt, but the comparison is probably not as bad as you are imagining.
This is what I found as well, and allude to in other comments in the thread. Particularly when you look at GPGPUs like the NVidia Tesla, they are generally pretty terrible when priced per gflop, because they live in a niche market that wants to handle finite element analysis and supercomputing tasks.
Image processing doesn't require double precision, so we don't need GPUs tuned for it, which means we can use FirePros and similar workstation- or server-grade cards.
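To make the precision point concrete, here's a toy sketch (illustrative only, not production code; the buffer size and gamma value are made up) of image math done entirely in single precision using the Accelerate framework that ships with OS X:

    import Accelerate

    // Toy example: gamma-correct a million normalized pixels entirely in
    // single precision. vvpowf computes out[i] = pixels[i] ** gamma[i].
    let pixels: [Float] = (0..<1_000_000).map { Float($0 % 256) / 255.0 }
    let gamma = [Float](repeating: 2.2, count: pixels.count)
    var out = [Float](repeating: 0, count: pixels.count)
    var n = Int32(pixels.count)
    vvpowf(&out, gamma, pixels, &n)
    // At 8-16 bits per channel of final output, Float already carries more
    // precision than the result can show; Double would buy nothing.

That's why cards with strong single-precision throughput but weak double-precision throughput are a fine fit for this workload.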
> there is no way that you can't build a hackintosh
Have you ever personally run a Hackintosh, full-time for a prolonged period of time?
It's anecdotal, but I can assure you that once you're used to how OS X and Apple hardware work together and never, ever, ever crash, using a Hackintosh is an exercise in frustration.
I had one of the known-best Hackintosh configurations in existence, and it didn't hold a candle to the MBP I had prior to it in terms of "it just works".
Sure, it was cheaper.
Guess what I did when that Hackintosh needed replacing? I walked in and dropped the coin on genuine Apple hardware without a second thought. I have never regretted it, and I'll never go back.
I'm surprised at how much attention 'hackintosh' is getting in this thread. It's a completely naive sub-topic. If you are a US corporation, 'hackintosh' is completely taboo; beyond taboo, it's illegal. If hackintosh is how some corporations run, yikes, let me know so I can never be their customer. If a company is that cheap with their hardware and their morals, I would hate to see how they treat their employees. (It's also naive to think that if a company saves money on their hardware by hacking the shit out of it, the money saved will be siphoned into workers' paychecks.)
I have been using a Hackintosh as my primary rig going on 4+ years now. It can be frustrating if you are trying to use the very latest hardware but I find the small issues a decent tradeoff.
It's not a matter of cheaper for me, but a matter of fitting my needs. I don't want to run AMD graphics cards, I need PCI-E, I want lots of internal storage, I want really high single threaded CPU performance.
I can't buy that from Apple in a desktop form-factor. So I have my Hackintosh.
That being said, I don't disagree that Apple hardware is nice. I have a rMBP 13 and intend on replacing it with a newer model Apple notebook soon.
For months and months on end of heavy usage without a single restart or issue?
EDIT: to expand a little - I was developing/compiling all day long on my ~2008 MBP with it plugged into an external monitor, network, mouse, kb. I'd close the lid and walk home with it, then watch movies, torrent, develop some more, surf etc. Close lid, and repeat for months on end. The only time I ever restarted was for OS updates, I never had a single app even crash in ~2 years of doing that.
My hackintosh (and the Windows 7 & 8 HP machines here at work) don't hold a candle to that.
That is your experience. A properly set up Hackintosh (with a custom DSDT and the proper kexts as needed) can be as reliable as a genuine Mac.
Regarding Windows machines, I've had desktops that would be used for months at a time (mostly rendering) without a restart and never crash.
A pretty good way to test for reliability is to let Prime95 and Memtest86 run for a week or so and see if it fails somewhere along the line (obviously proper cooling is a must); many consumer machines will fail this test.
You sound pretty confident in it, so here's a question from a perspective more relevant to the discussion:
Would you found a company and make your primary product hackintosh servers? Are you willing to stand behind your 'perfect' configuration and give those customers years of support?
These guys are running a real startup. A vendor with that exact promise and a failed delivery could tank them.
1. Apple's EULA does not allow OS X on non-Apple hardware.
2. Some major updates can break customizations and require some modifications (bootloaders, etc.) to be re-installed.
I have no problem helping a friend set up a Hackintosh when they want to save a few thousand dollars (I have set up a few already), with the understanding that they need to back up before doing any system updates and expect things to break after updating.
While Hackintoshes work well for personal use, as long as you are somewhat techy and pick the hardware carefully (putting aside the EULA issue), it does not make sense for anything large scale.
Sure, you can build such a hackintosh, but you're behind on OS X updates: you can't update automatically, since updating may break some of the custom hacks required, so auto-update is out.
Also, some of the drivers may have additional bugs due to the ever-so-slight hardware change. In the end, your system may freeze more often or display weird behaviour.
All that is apart from the EULA issue that was already mentioned. Maybe imgix already tested such hackintosh systems and realized that they're just not stable enough for continuous high workloads.
terhechte is right about this. To clarify: the reason you can't update automatically is that although most hardware is supported just fine, many minor adjustments are needed, mostly to text files but sometimes to binaries as well, to make OS X recognize your hardware. Every time an OS X update comes out, there are driver updates that overwrite your modified files.
An example of the sort of hack I'm talking about would be a graphics driver that says it's for the NVidia model E532D. Your graphics card is an E532E. You looked on the internet and found out they are exactly identical except for branding, so you dive into the driver and simply flip a bit to make OS X recognize it.
Re your hackintosh idea: I have no idea what the current state of play is, but I looked at this in some detail when they came out, and there was no way you could build a directly comparable machine without spending very similar money. The GPUs alone would just kill you, far more than any other component.
Apple fans often justify the prices with this argument, but in reality, there is no reason to buy a machine directly comparable in all aspects, just the ones I care about. And that can be had for a fraction of the cost.
It's unlikely that this company needs all the hardware features of the Mac Pro - probably just the beefy GPUs. Combine that with the power density problems (and higher monthly costs) of this solution compared to modern rack or blade servers, and it's far worse value.
Compare this also to John Siracusa's woes over buying a new Mac: he wants a graphics card powerful enough to game on that will remain useful for a number of years, and he wants to be able to get a Retina display.
He's for now stuck with a 2007 Mac Pro as Apple don't sell a suitable machine.
Well sure, if you don't need the machine, you don't need it, and that's fine.
When I was comparing hardware, you really couldn't buy comparable cost-per-flop GPUs for any significant savings (and you'd spend more on some similar builds), which was my point. No idea if that's still true. The idea that you could get the same thing for half the money just wasn't true.
Your comment about John Siracusa's problem doesn't seem relevant to the OP, although it's something to consider if you were buying a machine for home use.
I had the exact same thoughts. On one hand I love the idea of racking Mac Pros and having drawers full of Wera tools to work with, but on the other hand, it just seems like the kind of silly expenditure that comes with too much funding.
I've answered the concern about Wera hand tools before, but to reiterate my position: a $6 screwdriver doesn't matter if it helps you do your job.
Not having a screwdriver when you need it in a pinch is penny wise and pound foolish. At best you're now out 30 minutes while you drive to Home Depot, potentially during some sort of catastrophe. At worst maybe you simply cannot do the task that you need to do, because it's 2am and you're in Frankfurt. I've worked in a lot of datacenters that didn't stock basic tools to perform tasks, and frankly it sucked.
I keep a log of all of the purchases I made for the current datacenter build. Non-server / non-structural expenses account for less than $3000, which is less than the cost of a single server. This includes storage bins, carts, shelves, workbenches, chairs, supplies and tools.
Indeed, it is. Servers are expensive; everything built for a datacenter is expensive. It's very easy to make a passive chassis for far less than conventional datacenter gear prices.
A lot of cost estimates have been thrown around (here and elsewhere). The highest that I've seen is $4000 per unit. That is simply absurd. The initial run of prototypes was far less per unit, and this was a small batch made to iron out the kinks. Economies of scale and design tweaks will drive this down even further.
The chassis design is actually quite elegant from a manufacturing standpoint. That's something that I hope will be made evident by follow-up posts that delve into more technical detail.
[1] Our esteemed competitor, tarsnap.