
I think you may underestimate what these models do.

Proper multimodal models natively ingest whatever input you give them, store the useful information in an abstracted form (i.e. not just text) that builds up an internal world model, and then produce output in whatever format you ask for. It's no different from a mammal; the inputs are just different. Instead of relying on biological senses, they rely on text, video, images, and sound.
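
A minimal sketch of what "abstracted form" means here, assuming a shared embedding space that all modalities project into. The towers and dimensions are illustrative stand-ins, not any particular model's architecture:

    import torch
    import torch.nn as nn

    class MultimodalEncoder(nn.Module):
        """Toy sketch: each modality gets its own encoder, all projecting
        into one shared embedding space (the 'abstracted form')."""
        def __init__(self, dim=512):
            super().__init__()
            self.text_enc  = nn.Linear(300, dim)   # stand-in for a text tower
            self.image_enc = nn.Linear(2048, dim)  # stand-in for a vision tower
            self.audio_enc = nn.Linear(128, dim)   # stand-in for an audio tower

        def forward(self, text=None, image=None, audio=None):
            parts = []
            if text is not None:  parts.append(self.text_enc(text))
            if image is not None: parts.append(self.image_enc(image))
            if audio is not None: parts.append(self.audio_enc(audio))
            # Fuse whatever modalities were provided into one representation.
            return torch.stack(parts).mean(dim=0)

    enc = MultimodalEncoder()
    z = enc(text=torch.randn(1, 300), image=torch.randn(1, 2048))
    print(z.shape)  # torch.Size([1, 512]) -- same space regardless of input mix

The point of the shared space is that downstream layers never need to know which senses the information came from, which is why the input set is swappable.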

In theory you could connect one to a robot and it could gather real-world data much like a human does, though it would be limited by the number of sensors/nerves it has. (On the plus side, it has access to all recorded data and much faster read/write than a human.)
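
The embodiment idea reduces to a perception loop, sketched below. read_sensors() is a hypothetical stand-in for real hardware drivers, and the encoder is reused from the sketch above; none of this is a real robotics API:

    import time
    import torch

    def read_sensors():
        # Hypothetical drivers; a real robot would return camera frames,
        # audio buffers, proprioception, etc. Throughput is bounded by
        # how many sensors exist and their bandwidth.
        return {"image": torch.randn(1, 2048), "audio": torch.randn(1, 128)}

    enc = MultimodalEncoder()  # illustrative encoder from the sketch above

    for _ in range(3):  # a real loop would run indefinitely
        obs = read_sensors()
        z = enc(image=obs["image"], audio=obs["audio"])  # fold observation in
        time.sleep(0.05)  # ~20 Hz, the sensor-limited ceiling the comment notes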


