
Currently, I'm using generative AI of various kinds on my M1 Air (LLMs, image generation, TTS, STT), but I'm frustrated by its limitations: primarily memory, and secondarily the availability of MLX adaptations of new models.

I'm just an AI hobbyist, so I don't have the time or inclination to tweak everything. Given the non-NVIDIA GPU, how painful will it be to play around with new AI models on this system?



I run all the AI models without any issues on a desktop Radeon. I don't even think about it: I just start the ollama Docker container and run the models.

Inference is not an issue with AMD.
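For reference, Ollama publishes a ROCm-enabled Docker image for AMD GPUs; a minimal sketch of the workflow described above looks roughly like this (the model name is just an example):

```shell
# Start the Ollama container using the ROCm image, passing the AMD GPU
# device nodes (/dev/kfd and /dev/dri) through to the container.
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm

# Pull and chat with a model inside the running container.
docker exec -it ollama ollama run llama3.2
```

The volume mount keeps downloaded model weights across container restarts, and port 11434 exposes Ollama's HTTP API to other tools on the host.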



