Hacker News

My comment is about the general idea (LLM transformers on a chip), not any particular company, as I have no insight into the latter.

Such a chip (with support for LoRA finetuning) would likely be an enabler for next-generation robotics.

Right now, there is a growing corpus of papers and demos showing what's possible, but these demos often involve a round trip to a datacenter, which is not suitable for any serious production use: the latency is too high, and the dependency on an Internet connection is too fragile.

With a low-latency, cost- and energy-efficient way to run finetuned LLMs locally (and to keep finetuning them on each robot's own experience), we can actually make something useful in the real world.
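To illustrate why LoRA support matters for such a chip: the base weights can stay frozen (e.g. burned into cheap read-only memory), while only two small low-rank matrices per layer are trained on-device. A minimal sketch of that idea, with hypothetical shapes and no relation to any vendor's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 64, 4, 8.0  # illustrative sizes; r << d_in, d_out

W = rng.standard_normal((d_out, d_in))     # frozen base weight (never updated on-device)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))                   # zero-initialized, so the adapter starts as a no-op

def forward(x):
    # Base path plus scaled low-rank update; only A and B change during finetuning.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapter contributes nothing, so the output equals the base model's.
assert np.allclose(forward(x), W @ x)

# Trainable-parameter comparison: full update vs. LoRA update for this one matrix.
full_params = d_out * d_in        # 64 * 64 = 4096
lora_params = r * (d_in + d_out)  # 4 * 128 = 512
```

The point for hardware is that the on-chip training path only ever touches A and B, which is a small fraction of the memory traffic and compute of full finetuning.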


