
Isn't it a bit of an exaggeration to say that users can't use Snapdragon laptops for anything except running LLMs? Qualcomm and Microsoft already have a translation layer named Prism (not as good as Rosetta, but pretty good nevertheless): https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-x8...

I agree with losing faith in Intel chips though.





>Isn't it a bit of an exaggeration to say that users can't use Snapdragon laptops for anything except running LLMs?

I think what OP meant was that the memory occupied by the model means you can't do much alongside inferencing, e.g. have a compile job or whatever running (unless you unload the model once you're done asking it questions).
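
For a sense of scale, here's a back-of-the-envelope sketch; the parameter counts and quantization levels are illustrative assumptions, not numbers from the thread:

    # Rough RAM needed just for model weights (weights only; KV cache and
    # runtime overhead add several GB on top).
    def model_ram_gb(params_billions: float, bytes_per_weight: float) -> float:
        # 1e9 params * bytes-per-weight / 1e9 bytes-per-GB cancels out
        return params_billions * bytes_per_weight

    print(model_ram_gb(70, 0.5))   # 70B model at ~4-bit quantization -> ~35 GB
    print(model_ram_gb(8, 2.0))    # 8B model at fp16 -> ~16 GB

On a 32 GB laptop the 70B case simply doesn't fit, and even the 8B fp16 case leaves little headroom for a build job plus a browser.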

To be honest, we could really do with RAM abundance. Imagine if 128 GB of RAM became as commonplace as 8 GB is today; that would normalize local LLM inferencing (or at least make a decent attempt at it).

Of course, you'd need the bandwidth too...
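
On the bandwidth point: decode speed is roughly bounded by memory bandwidth divided by the bytes streamed per generated token (approximately the whole weight set). A sketch with assumed numbers (~135 GB/s for a Snapdragon X Elite class part, hypothetical model sizes):

    # Rough decode-speed ceiling: each generated token reads ~all weights from
    # memory, so tokens/s <= bandwidth / model size. Assumptions, not measurements.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    print(max_tokens_per_sec(135, 35))   # ~135 GB/s, 35 GB model -> ~4 tokens/s
    print(max_tokens_per_sec(135, 5))    # same bandwidth, 5 GB model -> ~27 tokens/s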


Is Prism really not as good as Rosetta 2? At least Prism supports AVX.


