
Apple will gladly sell you a GPU with 192GB of memory, but your wallet won't like it.


Won't Nvidia, Intel, Qualcomm, Falanx (who, from what I can see, make the ARM Mali GPUs), and Imagination Technologies (PowerVR) do the same? They each make a GPU, and if you pay them enough money, I have a hard time believing they won't figure out how to slap enough RAM on a board for one of their existing products and make whatever changes are required.


The US government is looking into heavily limiting the availability of high-end GPUs from now on, and the biggest and most effective bottleneck for AI right now is VRAM.

So maybe Apple is happy to sell huge GPUs like that, but the government will probably put them under export controls like the A100 and H100 already are.


Cue the PowerMac G4 TV ad.

https://youtu.be/lb7EhYy-2RE


OTOH, it comes free with one of the finest Unix workstations ever made.


It's easy to be best when you have no competition. Linux exists for the rest of us.


It’s good even if compared to Linux. Not perfect, but certainly not bad.


Which Unix workstation?


They are referring to macOS being included with expensive Mac hardware.


How many desktop systems can have 192GB visible to the GPU? How many cost less than a Mac?


Just because it has a lot of GPU RAM doesn't mean it's actually useful for people doing ML work.

How many companies use Macs for ML work instead of Nvidia and CUDA?


It won’t be as fast as a high-end GPU like the MI300 series, but it’s enough to check whether the code works before running it on a high-end, GPU-heavy machine. The large GPU-accessible RAM also simplifies the code enormously, as you don’t have to partition and shuffle data between CPU and GPU.
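To make the "no partition and shuffle" point concrete, here's a minimal sketch, assuming PyTorch with the Apple MPS backend; the tensor size is an illustrative assumption, not something from the thread. The whole matrix sits in GPU-visible unified memory, so there's no chunked host-to-device copy loop.

    # Minimal sketch, assuming PyTorch with the Apple MPS backend.
    # The ~25 GB matrix size is an illustrative assumption, not a benchmark.
    import torch

    device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

    # The whole embedding matrix fits in GPU-visible unified memory,
    # so it never gets partitioned into chunks shuffled in from CPU RAM.
    embeddings = torch.randn(50_000_000, 128, device=device)  # ~25.6 GB fp32

    query = torch.randn(128, device=device)
    scores = embeddings @ query          # one matmul over the full matrix
    top = torch.topk(scores, k=10)
    print(top.indices.cpu())

On a card with, say, 24 GB of VRAM, you'd instead loop over slices of the matrix, copying each slice to the device before scoring it.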


OK, that's the theory, but how many companies actually do that in their workflow? All the ML companies I've seen use CUDA directly from prototyping to production, and don't bother with Apple ML unless their target happens to be exclusively iPhones.


Anyone doing heavy lifting and low-level tooling will be better off optimising for specialised training and inference engines. Usage will depend on where the abstraction layer is: if you want to see CUDA, then you'll need Nvidia. If all you care about is the size of the model and you know it's very large, then the Apple hardware becomes competitive.

Besides, you'd be well served with a Mac as a development desktop anyway.


Everyone has laptops now though. Nobody's gonna carry a Mac Studio between home and office. And if you're gonna use your Mac just as an SSH machine, then you'll remote into an Nvidia data center anyway, not a Mac Studio.


I still have a Mac Mini on my desk in my home office, regardless of the laptops. If I were into crunching 192 gigabytes of numbers at a time, I’d get myself a Mac Studio.

At least until someone makes an MI300A workstation.


Sure, but then if you take your code to production to monetize it as a business, you won't be deploying to a datacenter of Mac Minis.

What you alone do at home is irrelevant for the ML market as a whole, along with your Mac Mini; you alone won't move the market, and the companies serious about ML are all-in on Nvidia and CUDA-compatible code for mass deployment.

I can also get some NNs running on some microcontroller, but my hobby project won't move the market, and that's what I was talking about: the greater market, not your hobby project.



