Won't Nvidia, and Intel, and Qualcomm, and Falanx (who make the ARM Mali GPUs, from what I can see), and Imagination Technologies (PowerVR) do the same? They each make a GPU, and if you pay them enough money I have a hard time believing they won't figure out how to slap enough RAM on a board for one of their existing products and make whatever changes are required.
The US government is looking into heavily limiting the availability of high-end GPUs from now on. And the biggest and most effective bottleneck for AI right now is VRAM.
So maybe Apple would be happy to sell huge GPUs like that, but the government will probably put them under export controls, as the A100 and H100 already are.
It won’t be as fast as a high-end GPU like the MI300 series, but it’s enough to check whether the code works before running it on a high-end GPU-heavy machine, and the large GPU-accessible RAM simplifies the code enormously, as you don’t have to partition and shuffle data between CPU and GPU.
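To make that concrete, here's a rough sketch of what that simplification looks like in a PyTorch-style workflow (the sizes and the tiling loop are illustrative, not any vendor's recommended pattern):

    import torch

    # On Apple silicon the GPU shares the machine's unified memory, so a
    # tensor larger than any consumer card's VRAM can live on-device whole.
    device = "mps" if torch.backends.mps.is_available() else "cpu"

    # ~12 GB of fp32: fits comfortably in a 192 GB Mac Studio's
    # GPU-visible memory, but not in most discrete cards' VRAM.
    # (Illustrative size only.)
    big = torch.randn(60_000, 50_000, device=device)
    out = big @ big[:1000].T   # one matmul, no host<->device staging

    # On a discrete GPU the same job has to be tiled by hand, shuttling
    # chunks over PCIe, roughly:
    #   for chunk in big_on_host.split(rows_that_fit_in_vram):
    #       results.append((chunk.cuda() @ w).cpu())

The point isn't speed; it's that the naive version of the code just runs.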
Ok, that's the theory, but how many companies actually do that in their workflow? All the ML companies I've seen use CUDA directly from prototyping to production and don't bother with Apple's ML stack unless their target happens to be exclusively iPhones.
Anyone doing heavy lifting and low-level tooling will be better off optimising for specialised training and inference engines. Usage will depend on where the abstraction layer sits: if you want to see CUDA, then you'll need Nvidia. If all you care about is the size of the model and you know it's very large, then the Apple hardware becomes competitive.
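A minimal sketch of what that abstraction-layer point means in practice, assuming a PyTorch-style stack (the tiny nn.Linear is just a stand-in for a real model):

    import torch
    import torch.nn as nn

    # Above the abstraction layer, the backend is a one-line swap and the
    # same model code runs on Nvidia (cuda) or Apple (mps) hardware:
    device = ("cuda" if torch.cuda.is_available()
              else "mps" if torch.backends.mps.is_available()
              else "cpu")
    model = nn.Linear(512, 512).to(device)  # stand-in for a real network
    y = model(torch.randn(8, 512, device=device))

    # Below the abstraction layer -- hand-written CUDA kernels, cuDNN or
    # Triton code -- there is no such swap: that code only targets Nvidia,
    # no matter how much RAM anyone else puts on a board.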
Besides, you'd be well served with a Mac as a development desktop anyway.
Everyone has laptops now, though. Nobody's gonna carry a Mac Studio between home and office. And if you're gonna use your Mac just as an SSH machine, then you'll remote to an Nvidia data center anyway, not to a Mac Studio.
I still have a Mac Mini on my desk in my home office, regardless of the laptops. If I were into crunching 192 gigabytes of numbers at a time, I’d get myself a Mac Studio.
At least until someone makes an MI300A workstation.
Sure, but if you then take your code to production to monetize it as a business, you won't be deploying to a datacenter of Mac Minis.
What you alone do at home is irrelevant to the ML market as a whole, along with your Mac Mini; you alone won't move the market, and the companies serious about ML are all-in on Nvidia and CUDA-compatible code for mass deployment.
I can also run some NNs on a microcontroller, but my hobby project won't move the market, and that's what I was talking about: the greater market, not your hobby project.