> 0 because there's no MPS support. ... Studio with an M1 Max 64GB is ~13x slower at generative AI with SD1.5 and SDXL than an RTX 4090 24GB at the same cost (~$1,800, refurb)

Does the 4090 have a computer attached to it? It seems like with no computer, the speed would also be 0.




AI is best done in the Linux/Ubuntu/PyTorch/Nvidia ecosystem. Windows has some support thanks to WSL and Nvidia's drivers.

Mac is not a great place for AI/ML yet. Both the hardware and the software present challenges. It'll take time.

When I was hacking on AI stuff on a MacBook, I had a second Framework laptop with an eGPU that I SSH'd into.
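For reference, a minimal sketch of the device-selection dance in PyTorch, which is where "no MPS support" in downstream tools bites (assuming a reasonably recent PyTorch build; torch.backends.mps.is_available() is the real check, the toy model is only for illustration):

    import torch

    # Prefer CUDA (Nvidia), then MPS (Apple Silicon), then plain CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    # Toy model, just to show weights and tensors landing on the device.
    model = torch.nn.Linear(16, 4).to(device)
    x = torch.randn(8, 16, device=device)
    print(device, model(x).shape)

Any library that hardcodes "cuda" instead of doing this fallback is exactly how a Mac ends up with the "0" from the quote upthread.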


I think the tensor cores in the 4090 really help, and of course CUDA supporting every piece of hardware they offer (cough cough, ROCm) means that researchers are going to start there.

That said, I think Apple will have some interesting stuff in a year or two (M4 or more likely M5) where they can flex their NPU, Accelerate framework, and unified memory GPU and have it work with more modern requirements.

Time will tell what their software and hardware story is for local inference for generative AI.

Siri (dictation, some assistant stuff, and TTS) runs on device, and I doubt they want to undo that.

I doubt they will do much for training, but maybe a NUMA version of a Mac Pro with several M4 Ultras will prove me wrong?


> That said, I think Apple will have some interesting stuff in a year or two (M4 or more likely M5) where they can flex their NPU, Accelerate framework, and unified memory GPU and have it work with more modern requirements.

Plus two years for software support by the broader ecosystem.

Even Windows, with CUDA and drivers available, still sees weaker support than Linux.


If we’re being snarky, “Apple Silicon” won’t work without a motherboard and power supply either.


I think the point of the line in the GP is that people blindly believing Apple's marketing graphs is annoying; Apple Silicon GPU marketing comparisons against NVIDIA GPUs are made using the laptop variants, which were at some point the exact same silicon as the desktop GPUs, software-limited to fit within laptop power/cooling brackets, but that's no longer the case in the 30/40 generations.


I get what you’re saying, but I don’t think there was snark. Just the fact that a 4090 without a computer attached won’t work. It’s not like you can buy Apple Silicon without a Mac attached.


You can just get a PCIe enclosure and use the hardware. Attaching it to a VM makes sense because of drivers, etc.


eGPUs don't work with Apple Silicon Macs, only Intel ones. We ran into a lot of the limitations early on, and this is the only reason we still have 2018 Mac Minis and 2019 MacBook Pros.

https://support.apple.com/en-us/102363

Five years, and still no solution. And somehow they're spinning memory bandwidth as some sort of prescient act of Apple genius for AI. It's insulting.


Hmm... you're right. I tried searching for ANY support... but there's really nothing yet.


You can imagine our confusion and surprise when we got our first M1s and had to lose a few displays and our eGPUs.

Apple made a strange choice with their hardware that effectively pushed our development to Linux and Windows. If Macs didn't make such nice front ends, they almost wouldn't have a place at all.



