
> to put inference on edge devices...

It will take a long time before you can put performant inference on edge devices.

Just download one of the various open-source large language models and test it on your desktop...

Compute, memory, and storage requirements are enormous if you want decent results... I mean not just Llama gibberish.
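For a rough sense of scale, here is a back-of-the-envelope memory estimate for just holding the weights of a model locally (the 7B and 70B sizes are illustrative, and this ignores KV cache and activations, which add more):

```python
# Back-of-the-envelope estimate of RAM/VRAM needed just to hold LLM weights.
# Sizes are illustrative of common open-weight model classes.

def weight_memory_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """GB needed for the weights alone, ignoring KV cache and activations."""
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# fp16 uses 2 bytes per parameter; 4-bit quantization uses 0.5 bytes.
for params in (7, 70):
    fp16 = weight_memory_gb(params, 2.0)
    q4 = weight_memory_gb(params, 0.5)
    print(f"{params}B model: {fp16:.0f} GB fp16, {q4:.1f} GB 4-bit")
```

Even aggressively quantized, a 70B-class model needs tens of gigabytes for weights alone, which is far beyond typical phone or embedded memory budgets.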

Until those requirements are satisfied, remote models are the way to go, at least for conversational models.

LLMs aside, AlphaGo would not run on any end-user device, not by a long shot, even though it is already an 'old' technology.

I think 'neural engine' on end-user devices is just marketing nonsense at the current state of the art.
